
Information Systems: Latest Publications

Two-level massive string dictionaries
IF 3.0 | CAS Region 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-11-08 | DOI: 10.1016/j.is.2024.102490
Paolo Ferragina, Mariagiovanna Rotundo, Giorgio Vinciguerra
We study the problem of engineering space–time efficient data structures that support membership and rank queries on very large static dictionaries of strings.
Our solution is based on a very simple approach that decouples string storage and string indexing by means of a block-wise compression of the sorted dictionary strings (to be stored in external memory) and a succinct implementation of a Patricia trie (to be stored in internal memory) built on the first string of each block. On top of this, we design an in-memory cache that, given a sample of the query workload, augments the Patricia trie with additional information to reduce the number of I/Os of future queries.
Our experimental evaluation on two new datasets, which are at least one order of magnitude larger than the ones used in the literature, shows that (i) the state-of-the-art compressed string dictionaries, compared to Patricia tries, do not provide significant benefits when used in a large-scale indexing setting, and (ii) our two-level approach enables the indexing and storage of 3.5 billion strings taking 273 GB in less than 200 MB of internal memory and 83 GB of compressed disk space, while still guaranteeing comparable or faster query performance than that offered by array-based solutions used in modern storage systems, such as RocksDB, thus possibly influencing their future design.
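The two-level design the abstract describes can be illustrated with a hedged Python sketch: the sorted strings are partitioned into blocks, and only the first string of each block is kept in memory to route a query to the single block that can answer it. The paper's succinct Patricia trie and block-wise compression are replaced here by plain sorted lists, so this shows the lookup structure only, not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation): a two-level string
# dictionary. In-memory index = first string of each block (the paper uses a
# succinct Patricia trie); "external" storage = the uncompressed blocks.
import bisect

class TwoLevelDict:
    def __init__(self, sorted_strings, block_size=4):
        # Partition the sorted strings into fixed-size blocks.
        self.blocks = [sorted_strings[i:i + block_size]
                       for i in range(0, len(sorted_strings), block_size)]
        # In-memory index: the first string of each block.
        self.first_keys = [b[0] for b in self.blocks]
        self.block_size = block_size

    def rank(self, s):
        """Number of dictionary strings strictly smaller than s."""
        # Route the query to the unique candidate block via the index...
        i = bisect.bisect_right(self.first_keys, s) - 1
        if i < 0:
            return 0
        # ...then resolve it with a single access ("I/O") to that block.
        return i * self.block_size + bisect.bisect_left(self.blocks[i], s)

    def member(self, s):
        i = bisect.bisect_right(self.first_keys, s) - 1
        return i >= 0 and s in self.blocks[i]
```

For example, `TwoLevelDict(["ant", "bee", "cat", "dog", "emu", "fox"], block_size=2)` answers `member("cat")` and `rank("cat")` while keeping only `["ant", "cat", "emu"]` in the in-memory index.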
Citations: 0
A generative and discriminative model for diversity-promoting recommendation
IF 3.0 | CAS Region 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-11-06 | DOI: 10.1016/j.is.2024.102488
Yuli Liu
Diversity-promoting recommender systems, which aim to recommend diverse and relevant results to users, have received significant attention. However, current studies often face a trade-off: they either recommend highly accurate but homogeneous items or boost diversity at the cost of relevance, making it challenging for users to find truly satisfying recommendations that meet both their obvious and potential needs. To overcome this competitive trade-off, we introduce a unified framework that simultaneously leverages a discriminative model and a generative model, which allows us to adjust the focus of learning dynamically. Specifically, our framework uses Variational Graph Auto-Encoders to enhance the diversity of recommendations, while Graph Convolution Networks are employed to ensure high accuracy in predicting user preferences. This dual focus enables our system to deliver recommendations that are both diverse and closely aligned with user interests. Inspired by the quality vs. diversity decomposition of the Determinantal Point Process (DPP) kernel, we design a DPP likelihood-based loss function as the joint modeling loss. Extensive experiments on three real-world datasets demonstrate that the unified framework goes beyond the quality-diversity trade-off: instead of sacrificing accuracy to promote diversity, the joint modeling actually boosts both metrics.
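The quality vs. diversity decomposition the abstract invokes can be made concrete with a small sketch. A DPP kernel factors as L[i][j] = q_i * S[i][j] * q_j, where q_i is an item's quality (relevance) and S its similarity matrix; the determinant of a principal submatrix scores a recommendation set, penalizing sets of similar items. The qualities and similarities below are made-up numbers for illustration only.

```python
# Illustrative sketch of the quality-vs-diversity decomposition of a DPP
# kernel: L[i][j] = q_i * S[i][j] * q_j. det(L_Y) scores a candidate set Y;
# relevant-but-redundant sets get a low determinant.
def dpp_kernel(quality, similarity):
    n = len(quality)
    return [[quality[i] * similarity[i][j] * quality[j]
             for j in range(n)] for i in range(n)]

def det2(L, i, j):
    """Determinant of the 2x2 principal submatrix for the set {i, j}."""
    return L[i][i] * L[j][j] - L[i][j] * L[j][i]

# Made-up example: three equally relevant items, two of them near-duplicates.
q = [1.0, 1.0, 1.0]
S = [[1.0, 0.9, 0.1],   # items 0 and 1 are near-duplicates,
     [0.9, 1.0, 0.1],   # item 2 is dissimilar to both
     [0.1, 0.1, 1.0]]
L = dpp_kernel(q, S)
# The redundant pair {0, 1} scores 1 - 0.81 = 0.19; the diverse pair
# {0, 2} scores 1 - 0.01 = 0.99, so the DPP prefers the diverse set.
```

Maximizing a DPP likelihood built from such determinants, as the paper's joint loss does, therefore rewards sets that are simultaneously relevant (large q) and diverse (small off-diagonal similarity).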
Citations: 0
Soundness unknotted: An efficient soundness checking algorithm for arbitrary cyclic process models by loosening loops
IF 3.0 | CAS Region 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-10-23 | DOI: 10.1016/j.is.2024.102476
Thomas M. Prinz , Yongsun Choi , N. Long Ha
Although domain experts usually create business process models, these models can still contain errors. For this reason, research and practice have established criteria for process models to provide confidence in the correctness or correct behavior of processes. One widespread criterion is soundness, which guarantees the absence of deadlocks and of lack-of-synchronization errors. Checking the soundness of process models is not trivial, and cyclic process models increase the complexity of the check even further. This paper presents a novel approach for verifying soundness that has an efficient cubic worst-case runtime, even for arbitrary cyclic process models. The approach relies on three key techniques (loop conversion, loop reduction, and loop decomposition) to convert any cyclic process model into a set of acyclic process models. Using this approach, we have developed five straightforward rules to verify soundness, reusing existing approaches for checking the soundness of acyclic models.
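The abstract does not spell out how loop conversion, reduction, and decomposition work, so the sketch below illustrates only the prerequisite step any such pipeline needs: locating the edges that make a process graph cyclic. This is a standard DFS back-edge search, not the paper's algorithm; the tiny process graph is made up.

```python
# Hedged illustration (not the paper's technique): a DFS that reports the
# back edges of a directed process graph. Removing or rewriting these edges
# is the kind of step a cyclic-to-acyclic conversion must start from.
def back_edges(graph, start):
    """Back edges reachable from `start` in a digraph {node: [successors]}."""
    result, on_stack, done = set(), set(), set()

    def dfs(u):
        on_stack.add(u)
        for v in graph.get(u, []):
            if v in on_stack:          # edge closes a cycle
                result.add((u, v))
            elif v not in done:
                dfs(v)
        on_stack.discard(u)
        done.add(u)

    dfs(start)
    return result

# A tiny cyclic process graph: s -> a -> b -> a (loop), b -> e (exit).
g = {"s": ["a"], "a": ["b"], "b": ["a", "e"], "e": []}
# back_edges(g, "s") -> {("b", "a")}
```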
Citations: 0
The composition diagram of a complex process: Enhancing understanding of hierarchical business processes
IF 3.0 | CAS Region 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-10-21 | DOI: 10.1016/j.is.2024.102489
Pavol Jurik , Peter Schmidt , Martin Misut , Ivan Brezina , Marian Reiff
The article presents the Composition Diagram of a Complex Process (CDCP), a new diagramming method for modelling business processes with complex vertical structures. This method addresses the limitations of traditional modelling techniques such as BPMN, Activity Diagrams (AD), and Event-Driven Process Chains (EPC).
An experiment was carried out on 277 students from different study programs and grades to determine the effectiveness of the methods. The main objective was to evaluate the usability and effectiveness of CDCP compared to established methods, focusing on two primary tasks: interpretation and diagram creation. The participants' performance was evaluated based on the objective results of the tasks and the subjective feedback from a questionnaire. The results indicate that CDCP was the most effective method for the reading and drawing tasks, outperforming BPMN and EPC in terms of understanding and ease of use. Statistical analysis of variance showed that while the year of study did not significantly affect performance, the study program and the method used had a significant effect. These findings highlight the potential of CDCP as a more accessible and intuitive business process modelling tool, even for users with minimal prior experience.
Citations: 0
Emerging industry classification based on BERT model
IF 3.0 | CAS Region 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-10-16 | DOI: 10.1016/j.is.2024.102484
Baocheng Yang , Bing Zhang , Kevin Cutsforth , Shanfu Yu , Xiaowen Yu
Accurate industry classification is central to economic analysis and policy making. Current classification systems, while foundational, exhibit limitations in the face of the exponential growth of big data. These limitations include subjectivity, leading to inconsistencies and misclassifications. To overcome these shortcomings, this paper focuses on utilizing the BERT model for classifying emerging industries through the identification of salient attributes within business descriptions. The proposed method identifies clusters of firms within distinct industries, thereby transcending the restrictions inherent in existing classification systems. The model exhibits an impressive degree of precision in categorizing business descriptions, achieving accuracy rates spanning from 84.11% to 99.66% across all 16 industry classifications. This research enriches the field of industry classification literature through a practical examination of the efficacy of machine learning techniques. Our experiments achieved strong performance, highlighting the effectiveness of the BERT model in accurately classifying and identifying emerging industries, providing valuable insights for industry analysts and policymakers.
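The paper fine-tunes BERT on business descriptions; as a schematic of the final classification step only, the sketch below assumes sentence embeddings have already been computed (the vectors and industry names are made up) and assigns a firm to the industry with the most similar centroid. This is an illustration of embedding-based classification, not the authors' fine-tuning setup.

```python
# Schematic only: assumes BERT embeddings of business descriptions are
# already available (vectors below are made up) and shows assignment of a
# firm to the nearest industry centroid by cosine similarity.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def classify(embedding, industry_centroids):
    """Return the industry whose centroid is most similar to the firm."""
    return max(industry_centroids,
               key=lambda k: cosine(embedding, industry_centroids[k]))

# Hypothetical 3-dimensional centroids for two industries.
centroids = {"fintech": [0.9, 0.1, 0.0], "biotech": [0.0, 0.2, 0.9]}
firm = [0.8, 0.2, 0.1]   # hypothetical embedding of one firm's description
# classify(firm, centroids) -> "fintech"
```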
Citations: 0
ExamGuard: Smart contracts for secure online test
IF 3.0 | CAS Region 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-10-16 | DOI: 10.1016/j.is.2024.102485
Mayuri Diwakar Kulkarni, Ashish Awate, Makarand Shahade, Bhushan Nandwalkar
The education sector is currently experiencing profound changes, primarily driven by the widespread adoption of online platforms for conducting examinations. This paper delves into the utilization of smart contracts as a means to revolutionize the monitoring and execution of online examinations, thereby guaranteeing the traceability of evaluation data and examinee activities. In this context, the integration of advanced technologies such as the PoseNet algorithm, derived from the TensorFlow Model, emerges as a pivotal component. By leveraging PoseNet, the system adeptly identifies both single and multiple faces of examinees, thereby ensuring the authenticity and integrity of examination sessions. Moreover, the incorporation of the COCO dataset facilitates the recognition of objects within examination environments, further bolstering the system's capabilities in monitoring examinee activities. Of paramount importance is the secure storage of evidence collected during examinations, a task efficiently accomplished through the implementation of blockchain technology. This platform not only ensures the immutability of data but also safeguards against potential tampering, thereby upholding the credibility of examination results. Through the utilization of smart contracts, the proposed framework not only streamlines the examination process but also instills transparency and integrity, thereby addressing inherent challenges encountered in traditional examination methods. One of the key advantages of this technological integration lies in its ability to modernize examination procedures while concurrently reinforcing trust and accountability within the educational assessment ecosystem. By harnessing the power of smart contracts, educational institutions can mitigate concerns pertaining to data manipulation and malpractice, thereby fostering a more secure and reliable examination environment.
Furthermore, the transparency afforded by blockchain technology ensures that examination outcomes are verifiable and auditable, instilling confidence among stakeholders and enhancing the overall credibility of the assessment process. In conclusion, the adoption of smart contracts represents a paradigm shift in the realm of educational assessment, offering a comprehensive solution to the challenges posed by traditional examination methods. By embracing advanced technologies such as PoseNet and blockchain, educational institutions can not only streamline examination procedures but also uphold the highest standards of integrity and accountability. As such, the integration of smart contracts holds immense potential in shaping the future of online examinations, paving the way for a more efficient, transparent, and trustworthy assessment ecosystem.
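The tamper-evidence property the abstract attributes to blockchain storage can be illustrated with a minimal hash chain, in which each evidence record commits to the hash of its predecessor, so any retroactive edit invalidates every later hash. This is a hedged stand-in for the paper's smart-contract platform; the event payloads are made up.

```python
# Hedged sketch (not the paper's implementation): a hash-chained evidence
# log. Each record commits to the previous record's hash, so tampering with
# any record is detectable when the chain is re-verified.
import hashlib
import json

def add_record(chain, payload):
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
    chain.append({"prev": prev, "payload": payload,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    prev = "0" * 64
    for rec in chain:
        body = json.dumps({"prev": rec["prev"], "payload": rec["payload"]},
                          sort_keys=True)
        if rec["prev"] != prev or \
           rec["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

log = []
add_record(log, {"event": "face_detected", "count": 1})   # made-up events
add_record(log, {"event": "object_flagged", "label": "phone"})
assert verify(log)
log[0]["payload"]["count"] = 2      # tampering with stored evidence...
assert not verify(log)              # ...is detected on verification
```

A real deployment would anchor these hashes on-chain via a smart contract rather than keep the list in memory; the chaining logic is the same.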
Citations: 0
Explaining results of path queries on graphs: Single-path results for context-free path queries
IF 3.0 | CAS Region 2, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-10-16 | DOI: 10.1016/j.is.2024.102475
Jelle Hellings
Many graph query languages use, at their core, path queries that yield node pairs (m,n) that are connected by a path of interest. For the end-user, such node pairs only give limited insight as to why this result is obtained, as the pair does not directly identify the underlying path of interest.
In this paper, we propose the single-path semantics to address this limitation of path queries. Under single-path semantics, a path query evaluates to a single path that connects nodes m and n and satisfies the conditions of the query. To put our proposal into practice, we provide an efficient algorithm for evaluating context-free path queries under the single-path semantics. Additionally, we perform a short evaluation of our techniques that shows that the single-path semantics is practically feasible, even when query results grow large.
In addition, we explore the formal relationship between the single-path semantics we propose and the problem of finding the shortest string in the intersection of a regular language (representing a graph) and a context-free language (representing a path query). As our formal results show, there is a distinction between the complexity of the single-path semantics for queries that use a single edge label and queries that use multiple edge labels: for queries that use a single edge label, the length of the shortest path is linearly upper bounded by the number of nodes in the graph, whereas for queries that use multiple edge labels, the length of the shortest path has a worst-case quadratic lower bound.
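For readers unfamiliar with context-free path queries, the sketch below shows the standard relational evaluation that the single-path semantics refines: given a grammar in Chomsky normal form, derive all triples (A, m, n) meaning "some path from m to n spells a string derivable from A". This is a generic worklist sketch, not the paper's single-path algorithm, and the grammar, graph, and node numbers are made up; single-path semantics would additionally record one witness path per triple.

```python
# Illustrative sketch of context-free path query evaluation (all-pairs
# semantics): saturate the set of triples (A, m, n) until a fixpoint.
def cfpq(unit_rules, pair_rules, edges):
    # unit_rules: {terminal: {nonterminals A with A -> terminal}}
    # pair_rules: {(B, C): {nonterminals A with A -> B C}}
    # edges: iterable of (source, label, target)
    r = {(A, m, n) for (m, a, n) in edges for A in unit_rules.get(a, ())}
    changed = True
    while changed:
        changed = False
        for (B, m, k) in list(r):          # join triples that share a node
            for (C, k2, n) in list(r):
                if k == k2:
                    for A in pair_rules.get((B, C), ()):
                        if (A, m, n) not in r:
                            r.add((A, m, n))
                            changed = True
    return r

# Made-up query S -> a S b | a b (balanced a's and b's), in CNF:
# S -> A X | A B ; X -> S B ; A -> a ; B -> b
units = {"a": {"A"}, "b": {"B"}}
pairs = {("A", "X"): {"S"}, ("A", "B"): {"S"}, ("S", "B"): {"X"}}
graph = [(0, "a", 1), (1, "a", 2), (2, "b", 3), (3, "b", 4)]
result = cfpq(units, pairs, graph)
# (S, 1, 3): path 1 -a-> 2 -b-> 3 spells "ab"; (S, 0, 4) spells "aabb".
```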
Citations: 0
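The abstract above relates single-path semantics for context-free path queries to finding a shortest string in the intersection of a regular language (the graph) and a context-free language (the query). As a concrete illustration — a sketch of the single-path idea, not the paper's algorithm — the following computes the length of a shortest source-to-target path whose edge-label string is derivable from a start nonterminal, using a Knuth/Dijkstra-style shortest-derivation fixpoint over (nonterminal, u, v) triples; the grammar encoding (CNF dicts) is an assumption for this sketch.

```python
import heapq
from collections import defaultdict

def shortest_cfpq(edges, unary, binary, start_nt, source, target):
    """Length of a shortest path from source to target whose label string
    is derivable from start_nt (grammar in Chomsky normal form), or None.
    edges:  [(u, label, v)]
    unary:  {A: set_of_terminals}   for rules A -> a
    binary: {A: [(B, C), ...]}      for rules A -> B C
    """
    left_of, right_of = defaultdict(list), defaultdict(list)
    for A, pairs in binary.items():
        for B, C in pairs:
            left_of[B].append((A, C))   # rule A -> B C, keyed by left child
            right_of[C].append((A, B))  # rule A -> B C, keyed by right child
    pq, best = [], {}
    starts, ends = defaultdict(list), defaultdict(list)
    for u, a, v in edges:               # seed with single-edge derivations
        for A, terms in unary.items():
            if a in terms:
                heapq.heappush(pq, (1, A, u, v))
    while pq:
        d, A, u, v = heapq.heappop(pq)
        if (A, u, v) in best:
            continue                    # already settled with a shorter path
        best[(A, u, v)] = d
        starts[(A, u)].append((v, d))
        ends[(A, v)].append((u, d))
        for X, C in left_of[A]:         # X -> A C: extend u -(A)-> v -(C)-> w
            for w, d2 in starts[(C, v)]:
                if (X, u, w) not in best:
                    heapq.heappush(pq, (d + d2, X, u, w))
        for X, B in right_of[A]:        # X -> B A: extend t -(B)-> u -(A)-> v
            for t, d2 in ends[(B, u)]:
                if (X, t, v) not in best:
                    heapq.heappush(pq, (d2 + d, X, t, v))
    return best.get((start_nt, source, target))
```

For instance, with the grammar S → a S b | a b (in CNF) over a graph with a-edges 0→1→2 and b-edges 2→3→4, the shortest path from 0 to 4 matching S has length 4 ("aabb").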
Hands-on analysis of using large language models for the auto evaluation of programming assignments
IF 3, CAS Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS, Pub Date: 2024-10-15, DOI: 10.1016/j.is.2024.102473
Kareem Mohamed , Mina Yousef , Walaa Medhat , Ensaf Hussein Mohamed , Ghada Khoriba , Tamer Arafa
The increasing adoption of programming education necessitates efficient and accurate methods for evaluating students’ coding assignments. Traditional manual grading is time-consuming, often inconsistent, and prone to subjective biases. This paper explores the application of large language models (LLMs) for the automated evaluation of programming assignments. LLMs can use advanced natural language processing capabilities to assess code quality, functionality, and adherence to best practices, providing detailed feedback and grades. We demonstrate the effectiveness of LLMs through experiments comparing their performance with human evaluators across various programming tasks. Our study evaluates the performance of several LLMs for automated grading. Gemini 1.5 Pro achieves an exact match accuracy of 86% and a ±1 accuracy of 98%. GPT-4o also demonstrates strong performance, with exact match and ±1 accuracies of 69% and 97%, respectively. Both models correlate highly with human evaluations, indicating their potential for reliable automated grading. However, models such as Llama 3 70B and Mixtral 8 × 7B exhibit low accuracy and alignment with human grading, particularly in problem-solving tasks. These findings suggest that advanced LLMs are instrumental in scalable, automated educational assessment. Additionally, LLMs enhance the learning experience by offering personalized, instant feedback, fostering an iterative learning process. The findings suggest that LLMs could play a pivotal role in the future of programming education, ensuring scalability and consistency in evaluation.
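The agreement figures quoted above (exact-match accuracy and ±1 accuracy against human grades) are straightforward to compute. A minimal sketch, using illustrative grades rather than the paper's data:

```python
def grading_agreement(llm_grades, human_grades):
    """Fraction of assignments where the LLM grade matches the human grade
    exactly, and where it lands within one point (the ±1 accuracy)."""
    if len(llm_grades) != len(human_grades) or not llm_grades:
        raise ValueError("need two equal-length, non-empty grade lists")
    n = len(llm_grades)
    exact = sum(a == b for a, b in zip(llm_grades, human_grades)) / n
    within_one = sum(abs(a - b) <= 1 for a, b in zip(llm_grades, human_grades)) / n
    return exact, within_one

# Illustrative run: three exact matches, one off-by-one, one off-by-two.
# grading_agreement([10, 8, 7, 9, 5], [10, 9, 7, 9, 3]) -> (0.6, 0.8)
```

Reporting both metrics together, as the study does, separates "the model assigns the same grade" from the weaker "the model is at most one point off", which is useful when grade scales are coarse.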
{"title":"Hands-on analysis of using large language models for the auto evaluation of programming assignments","authors":"Kareem Mohamed ,&nbsp;Mina Yousef ,&nbsp;Walaa Medhat ,&nbsp;Ensaf Hussein Mohamed ,&nbsp;Ghada Khoriba ,&nbsp;Tamer Arafa","doi":"10.1016/j.is.2024.102473","DOIUrl":"10.1016/j.is.2024.102473","url":null,"abstract":"<div><div>The increasing adoption of programming education necessitates efficient and accurate methods for evaluating students’ coding assignments. Traditional manual grading is time-consuming, often inconsistent, and prone to subjective biases. This paper explores the application of large language models (LLMs) for the automated evaluation of programming assignments. LLMs can use advanced natural language processing capabilities to assess code quality, functionality, and adherence to best practices, providing detailed feedback and grades. We demonstrate the effectiveness of LLMs through experiments comparing their performance with human evaluators across various programming tasks. Our study evaluates the performance of several LLMs for automated grading. Gemini 1.5 Pro achieves an exact match accuracy of 86% and a <span><math><mrow><mo>±</mo><mn>1</mn></mrow></math></span> accuracy of 98%. GPT-4o also demonstrates strong performance, with exact match and <span><math><mrow><mo>±</mo><mn>1</mn></mrow></math></span> accuracies of 69% and 97%, respectively. Both models correlate highly with human evaluations, indicating their potential for reliable automated grading. However, models such as Llama 3 70B and Mixtral 8 <span><math><mo>×</mo></math></span> 7B exhibit low accuracy and alignment with human grading, particularly in problem-solving tasks. These findings suggest that advanced LLMs are instrumental in scalable, automated educational assessment. Additionally, LLMs enhance the learning experience by offering personalized, instant feedback, fostering an iterative learning process. 
The findings suggest that LLMs could play a pivotal role in the future of programming education, ensuring scalability and consistency in evaluation.</div></div>","PeriodicalId":50363,"journal":{"name":"Information Systems","volume":"128 ","pages":"Article 102473"},"PeriodicalIF":3.0,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142529772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Influence maximization based on discrete particle swarm optimization on multilayer network
IF 3, CAS Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS, Pub Date: 2024-10-09, DOI: 10.1016/j.is.2024.102466
Saiwei Wang , Wei Liu , Ling Chen , Shijie Zong
Influence maximization (IM) aims to strategically select influential users to maximize information propagation in social networks. Most of the existing studies focus on IM in single-layer networks. However, we have observed that individuals often engage in multiple social platforms to fulfill various social needs. To make better use of this observation, we consider an extended problem of how to maximize influence spread in multilayer networks. The Multilayer Influence Maximization (MLIM) problem is different from the IM problem because information propagation behaves differently in multilayer networks compared to single-layer networks: users influenced on one layer may trigger the propagation of information on another layer. Our work successfully models the information propagation process as a Multilayer Independent Cascade model in multilayer networks. Based on the characteristics of this model, we introduce an approximation function called Multilayer Expected Diffusion Value (MLEDV) for it. However, the NP-hardness of the MLIM problem has posed significant challenges to our work. To tackle the issue, we devise a novel algorithm based on Discrete Particle Swarm Optimization. Our algorithm consists of two stages: 1) candidate node selection, where we devise a novel centrality metric called Random connectivity Centrality, which assesses the importance of nodes from a connectivity perspective, to select candidate nodes; 2) seed selection, where we employ a discrete particle swarm algorithm to find seed nodes from the candidate nodes. We use MLEDV as a fitness function to measure the spreading power of candidate solutions in our algorithm. Additionally, we introduce a Neighborhood Optimization strategy to improve the convergence of the algorithm. We conduct experiments on four real-world networks and two self-built networks and demonstrate that our algorithms are effective for the MLIM problem.
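To make the two-stage pipeline concrete, here is a minimal sketch under loud assumptions: a single-layer adjacency dict stands in for the multilayer network, degree centrality stands in for the paper's Random connectivity Centrality, a crude one-hop spread proxy stands in for MLEDV, and the discrete PSO update is simplified to "borrow a node from the personal/global best set, else mutate". None of these are the paper's exact formulas.

```python
import random

def one_hop_spread(graph, seeds, p=0.5):
    """Crude one-hop expected-spread proxy (assumption: NOT the paper's
    MLEDV): each seed counts 1, each distinct out-neighbor counts p."""
    covered = set(seeds)
    score = float(len(seeds))
    for s in seeds:
        for v in graph.get(s, ()):
            if v not in covered:
                score += p
                covered.add(v)      # count each activated neighbor once
    return score

def dpso_seeds(graph, k, n_particles=8, iters=50, rng_seed=0):
    """Two-stage seed selection: candidate pool by degree centrality,
    then a simplified discrete PSO over size-k seed sets."""
    rng = random.Random(rng_seed)
    candidates = sorted(graph, key=lambda v: len(graph[v]), reverse=True)[:4 * k]

    def fitness(seed_set):
        return one_hop_spread(graph, seed_set)

    particles = [rng.sample(candidates, k) for _ in range(n_particles)]
    pbest = [p[:] for p in particles]
    gbest = max(particles, key=fitness)[:]
    for _ in range(iters):
        for i, part in enumerate(particles):
            guide = pbest[i] if rng.random() < 0.5 else gbest
            nxt = part[:]
            borrow = [v for v in guide if v not in part]
            if borrow:                          # drift toward the guide set
                nxt[rng.randrange(k)] = rng.choice(borrow)
            else:                               # random mutation for diversity
                fresh = [v for v in candidates if v not in part]
                if fresh:
                    nxt[rng.randrange(k)] = rng.choice(fresh)
            particles[i] = nxt
            if fitness(nxt) > fitness(pbest[i]):
                pbest[i] = nxt[:]
        gbest = max(pbest + [gbest], key=fitness)[:]
    return set(gbest)
```

On a star graph the hub dominates the fitness, so the swarm converges to it; the candidate-pool stage keeps the search space small, which is the same role the centrality filter plays in the paper.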
{"title":"Influence maximization based on discrete particle swarm optimization on multilayer network","authors":"Saiwei Wang ,&nbsp;Wei Liu ,&nbsp;Ling Chen ,&nbsp;Shijie Zong","doi":"10.1016/j.is.2024.102466","DOIUrl":"10.1016/j.is.2024.102466","url":null,"abstract":"<div><div>Influence maximization (IM) aims to strategically select influential users to maximize information propagation in social networks. Most of the existing studies focus on IM in single-layer networks. However, we have observed that individuals often engage in multiple social platforms to fulfill various social needs. To make better use of this observation, we consider an extended problem of how to maximize influence spread in multilayer networks. The Multilayer Influence Maximization (MLIM) problem is different from the IM problem because information propagation behaves differently in multilayer networks compared to single-layer networks: users influenced on one layer may trigger the propagation of information on another layer. Our work successfully models the information propagation process as a Multilayer Independent Cascade model in multilayer networks. Based on the characteristics of this model, we introduce an approximation function called Multilayer Expected Diffusion Value (MLEDV) for it. However, the NP-hardness of the MLIM problem has posed significant challenges to our work. To tackle the issue, we devise a novel algorithm based on Discrete Particle Swarm Optimization. Our algorithm consists of two stages: 1) the candidate node selection, where we devise a novel centrality metric called Random connectivity Centrality to select candidate nodes, which assesses the importance of nodes from a connectivity perspective. 2)the seed selection, where we employ a discrete particle swarm algorithm to find seed nodes from the candidate nodes. We use MLEDV as a fitness function to measure the spreading power of candidate solutions in our algorithm. 
Additionally, we introduce a Neighborhood Optimization strategy to increase the convergence of the algorithm. We conduct experiments on four real-world networks and two self-built networks and demonstrate that our algorithms are effective for the MLIM problem.</div></div>","PeriodicalId":50363,"journal":{"name":"Information Systems","volume":"127 ","pages":"Article 102466"},"PeriodicalIF":3.0,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142420321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Runtime integration of machine learning and simulation for business processes: Time and decision mining predictions
IF 3, CAS Tier 2 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS, Pub Date: 2024-10-09, DOI: 10.1016/j.is.2024.102472
Francesca Meneghello , Chiara Di Francescomarino , Chiara Ghidini , Massimiliano Ronzani
Recent research in Computer Science has investigated the use of Deep Learning (DL) techniques to complement outcomes or decisions within a Discrete Event Simulation (DES) model. The main idea of this combination is to maintain a white-box simulation model and complement it with information provided by DL models, thereby overcoming the unrealistic or oversimplified assumptions of traditional DESs. State-of-the-art techniques in BPM combine Deep Learning and Discrete Event Simulation in a post-integration fashion: first an entire simulation is performed, and then a DL model is used to add waiting times and processing times to the events produced by the simulation model.
In this paper, we aim at taking a step further by introducing Rims (Runtime Integration of Machine Learning and Simulation). Instead of complementing the outcome of a complete simulation with the results of predictions a posteriori, Rims tightly integrates the predictions of the DL model at runtime, during the simulation. This runtime integration enables us to fully exploit the specific predictions while respecting simulation execution, thus enhancing the performance of the overall system relative both to the single techniques (Business Process Simulation and DL) applied separately and to the post-integration approach. In particular, the runtime integration ensures the accuracy of intercase features for time prediction, such as the number of ongoing traces at a given time, by calculating them directly during the simulation, where all traces are executed in parallel. Additionally, it allows for the incorporation of online queue information in the DL model and enables the integration of other predictive models into the simulator to enhance decision point management within the process model. These enhancements improve the performance of Rims in accurately simulating the real process in terms of control flow, as well as in terms of time and congestion dimensions. Especially in process scenarios with significant congestion – when a limited availability of resources leads to significant event queues for their allocation – the ability of Rims to use queue features to predict waiting times allows it to surpass the state of the art. We evaluated our approach with real-world and synthetic event logs, using various metrics to assess the simulation model’s quality in terms of control-flow, time, and congestion dimensions.
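A minimal sketch of the runtime-integration idea, assuming a toy single-activity process and a stand-in predictor (this is not the Rims implementation): at every simulated arrival the "model" is queried with the live number of ongoing cases — exactly the kind of intercase feature that a post-integration pipeline, which only sees the finished simulation log, cannot observe.

```python
import heapq

def predict_waiting(ongoing):
    # Stand-in for the trained DL model (assumption: waiting time grows
    # linearly with the number of ongoing cases; purely illustrative).
    return 0.5 * ongoing

def simulate(arrivals, service_time=1.0):
    """Toy discrete-event loop with runtime-integrated predictions:
    the predictor sees the queue state *as the simulation runs*."""
    departures = []                  # min-heap of scheduled completion times
    waits = []
    for t in sorted(arrivals):
        while departures and departures[0] <= t:
            heapq.heappop(departures)            # these cases finished by t
        wait = predict_waiting(len(departures))  # live intercase feature
        waits.append(wait)
        heapq.heappush(departures, t + wait + service_time)
    return waits

# simulate([0.0, 0.2, 0.4]) -> [0.0, 0.5, 1.0]: each later arrival sees
# more ongoing cases, so its predicted wait grows.
```

The key design point mirrored here is that the predicted wait feeds back into the simulated timeline (the pushed departure time), so congestion compounds realistically instead of being patched on after the fact.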
{"title":"Runtime integration of machine learning and simulation for business processes: Time and decision mining predictions","authors":"Francesca Meneghello ,&nbsp;Chiara Di Francescomarino ,&nbsp;Chiara Ghidini ,&nbsp;Massimiliano Ronzani","doi":"10.1016/j.is.2024.102472","DOIUrl":"10.1016/j.is.2024.102472","url":null,"abstract":"<div><div>Recent research in Computer Science has investigated the use of Deep Learning (DL) techniques to complement outcomes or decisions within a Discrete Event Simulation (DES) model. The main idea of this combination is to maintain a white box simulation model complement it with information provided by DL models to overcome the unrealistic or oversimplified assumptions of traditional DESs. State-of-the-art techniques in BPM combine Deep Learning and Discrete Event Simulation in a post-integration fashion: first an entire simulation is performed, and then a DL model is used to add waiting times and processing times to the events produced by the simulation model.</div><div>In this paper, we aim at taking a step further by introducing <span>Rims</span> (Runtime Integration of Machine Learning and Simulation). Instead of complementing the outcome of a complete simulation with the results of predictions a posteriori, <span>Rims</span> provides a tight integration of the predictions of the DL model <em>at runtime</em> during the simulation. This runtime-integration enables us to fully exploit the specific predictions while respecting simulation execution, thus enhancing the performance of the overall system both w.r.t. the single techniques (Business Process Simulation and DL) separately and the post-integration approach. In particular, the runtime integration ensures the accuracy of intercase features for time prediction, such as the number of ongoing traces at a given time, by calculating them during directly the simulation, where all traces are executed in parallel. 
Additionally, it allows for the incorporation of online queue information in the DL model and enables the integration of other predictive models into the simulator to enhance decision point management within the process model. These enhancements improve the performance of <span>Rims</span> in accurately simulating the real process in terms of control flow, as well as in terms of time and congestion dimensions. Especially in process scenarios with significant congestion – when a limited availability of resources leads to significant event queues for their allocation – the ability of <span>Rims</span> to use queue features to predict waiting times allows it to surpass the state-of-the-art. We evaluated our approach with real-world and synthetic event logs, using various metrics to assess the simulation model’s quality in terms of control-flow, time, and congestion dimensions.</div></div>","PeriodicalId":50363,"journal":{"name":"Information Systems","volume":"128 ","pages":"Article 102472"},"PeriodicalIF":3.0,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142442017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Journal
Information Systems