
Latest publications in Applied Computing Review

Real-life Performance of Fairness Interventions - Introducing A New Benchmarking Dataset for Fair ML
IF 1 Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-03-27 DOI: 10.1145/3555776.3577634
Daphne Lenders, T. Calders
Some researchers evaluate their fair Machine Learning (ML) algorithms by simulating data with both a fair and a biased version of its labels. The fair labels reflect what labels individuals deserve, while the biased labels reflect labels obtained through a biased decision process. Given such data, fair algorithms are evaluated by measuring how well they can predict the fair labels after being trained on the biased ones. The big problem with these approaches is that they are based on simulated data, which is unlikely to capture the full complexity and noise of real-life decision problems. In this paper, we show how we created a new, more realistic dataset with both fair and biased labels. For this purpose, we started with an existing dataset containing information about high school students and whether or not they passed an exam. Through a human experiment, in which participants estimated the school performance of these students given short descriptions of them, we collected a biased version of these labels. We show how this new dataset can be used to evaluate fair ML algorithms, and how some fairness interventions that perform well in traditional evaluation schemes do not necessarily perform well with respect to the unbiased labels in our dataset, leading to new insights into the performance of debiasing techniques.
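The evaluation scheme just described, training on the biased labels and scoring against the fair ones, can be sketched as follows. The toy scores, labels, and one-feature threshold learner are invented for illustration; they are not the paper's dataset or models.

```python
# Toy sketch of the fair-vs-biased evaluation scheme (all data invented).

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def train_threshold(xs, ys):
    # fit a one-feature threshold classifier to the given labels
    best_t, best_acc = None, -1.0
    for t in sorted(set(xs)):
        preds = [1 if x >= t else 0 for x in xs]
        acc = accuracy(ys, preds)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

scores        = [0.2, 0.4, 0.5, 0.7, 0.8, 0.9]   # one feature per student
fair_labels   = [0, 0, 1, 1, 1, 1]               # what individuals deserve
biased_labels = [0, 0, 0, 1, 0, 1]               # biased decision process

t = train_threshold(scores, biased_labels)        # train on biased labels
preds = [1 if x >= t else 0 for x in scores]
fair_acc = accuracy(fair_labels, preds)           # score against fair labels
```

A fairness intervention would aim to raise `fair_acc` even though the learner only ever sees `biased_labels` during training.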
Citations: 1
Exploring alternatives of Complex Event Processing execution engines in demanding cases
IF 1 Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-03-27 DOI: 10.1145/3555776.3577734
Styliani Kyrama, A. Gounaris
Complex Event Processing (CEP) is a mature technology providing particularly efficient solutions for pattern detection in streaming settings. Nevertheless, even the most advanced CEP engines struggle to deal with cases where the number of pattern matches grows exponentially, e.g., when the queries involve Kleene operators to detect trends. In this work, we present an overview of state-of-the-art CEP engines used for pattern detection, focusing also on systems that discover demanding event trends. The main contribution lies in the comparison of existing CEP engine alternatives and the proposal of a novel hash-endowed, automata-based lazy hybrid execution engine, called SASEXT, that undertakes the processing of pattern queries involving Kleene patterns. Our proposal is orders of magnitude faster than existing solutions.
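The exponential blow-up mentioned above can be reproduced on a toy Kleene-style query that is not taken from the paper: enumerating every non-empty strictly increasing subsequence of a stream. On a monotonically increasing stream of n events there are 2^n - 1 matches, which is why lazy evaluation strategies matter for such trend queries.

```python
# Kleene-style trend query on a toy stream: every existing partial match can
# be extended by a new event, so the match set can double with each event.

def increasing_subsequences(stream):
    matches = []
    for event in stream:
        extended = [m + [event] for m in matches if m[-1] < event]
        matches += extended + [[event]]
    return matches

# On a strictly increasing stream of n events there are 2**n - 1 matches.
count = len(increasing_subsequences(list(range(10))))
```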
Citations: 0
Zero-Shot Taxonomy Mapping for Document Classification
IF 1 Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-03-27 DOI: 10.1145/3555776.3577653
L. Bongiovanni, Luca Bruno, Fabrizio Dominici, Giuseppe Rizzo
Classification of documents according to a custom internal hierarchical taxonomy is a common problem for many organizations that deal with textual data. Approaches aimed at addressing this challenge are, for the vast majority, supervised methods, which have the advantage of producing good results on specific datasets but the major drawback of requiring an entire corpus of annotated documents; moreover, the resulting models are not directly applicable to a different taxonomy. In this paper, we aim to contribute to this important issue by proposing a method to classify text according to a custom hierarchical taxonomy entirely without the need for labelled data. The idea is to first leverage the semantic information encoded in pre-trained Deep Language Models to assign a prior relevance score to each label of the taxonomy in a zero-shot fashion, and secondly to take advantage of the hierarchical structure to reinforce this prior belief. Experiments are conducted on three hierarchically annotated datasets: WebOfScience, DBpedia Extracts and Amazon Product Reviews, which are very diverse in the type of language adopted and have taxonomy depths of two and three levels. We first compare different zero-shot methods, and then we show that our hierarchy-aware approach substantially improves results across every dataset.
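The two-step idea, a zero-shot prior per label followed by hierarchical reinforcement, can be sketched with invented numbers. The taxonomy, the prior scores (stand-ins for the entailment scores a pre-trained model would produce), and the mixing weight are all illustrative assumptions, not the paper's actual formulation.

```python
# Sketch: blend each label's zero-shot prior with its ancestors' scores.

taxonomy = {            # child -> parent (None = root)
    "science": None,
    "physics": "science",
    "quantum": "physics",
    "sports": None,
    "tennis": "sports",
}

prior = {               # stand-in for zero-shot relevance scores
    "science": 0.70, "physics": 0.60, "quantum": 0.65,
    "sports": 0.10, "tennis": 0.20,
}

ALPHA = 0.7             # weight on a label's own prior vs. its ancestors

def reinforced(label):
    """Blend a label's prior with its parent's reinforced score."""
    parent = taxonomy[label]
    if parent is None:
        return prior[label]
    return ALPHA * prior[label] + (1 - ALPHA) * reinforced(parent)

best = max(taxonomy, key=reinforced)
```

Here the deep label "quantum" is pulled toward the scores of its ancestors "physics" and "science", which is the kind of hierarchy-aware reinforcement the abstract refers to.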
Citations: 0
DEDACS: Decentralized and dynamic access control for smart contracts in a policy-based manner
IF 1 Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-03-27 DOI: 10.1145/3555776.3577676
Kristof Jannes, Vincent Reniers, Wouter Lenaerts, B. Lagaisse, W. Joosen
Distributed Ledger Technologies (DLTs), or blockchains, have been steadily emerging and providing innovation in the past decade for several use cases, ranging from financial networks to notarization and trustworthy execution via smart contracts. DLTs are enticing due to their properties of decentralization, non-repudiation, and auditability (transparency). These properties hold great potential for access control systems, which can be implemented on-chain and executed without infringement and with full transparency. While it remains uncertain which use cases will truly turn out to be viable, many, such as financial transactions, can benefit from integrating certain restrictions via access control on the blockchain. In addition, smart contracts may in the future present security risks that are currently unknown. As a solution, access control policies can provide flexibility in the execution flow when adopted by smart contracts. In this paper, we present our DEDACS architecture, which provides decentralized and dynamic access control for smart contracts in a policy-based manner. Our access control is expressive, as it features policies, and dynamic, as the environment or users can be changed, or alternative policies can be assigned to smart contracts. DEDACS ensures that our access control preserves the desired properties of decentralization and transparency while aiming to keep the costs involved as minimal as possible. We have evaluated DEDACS in the context of a Uniswap token-exchange platform, in which we evaluated the costs related to (i) the overhead introduced at deployment time and (ii) the operational overhead cost. DEDACS introduces a relative overhead of on average 52% at deployment time, and an operational overhead between 52% and 80% depending on the chosen policy and its complexity.
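As a purely illustrative sketch of policy-based access control for a contract call (the policy shape, attribute names, and contract names below are invented, not DEDACS's actual design), a policy can be looked up for the target contract and evaluated against the caller's context before the call is allowed to proceed:

```python
# Hypothetical policy table: one predicate per guarded contract.
policies = {
    "token_exchange": lambda ctx: ctx["role"] == "trader" and ctx["amount"] <= 1000,
}

def guarded_call(contract, ctx, action):
    """Evaluate the contract's policy against the caller context; deny by default."""
    policy = policies.get(contract)
    if policy is None or not policy(ctx):
        return "denied"
    return action()

result = guarded_call("token_exchange",
                      {"role": "trader", "amount": 500},
                      lambda: "swap executed")
```

Because the policy is data rather than hard-coded contract logic, it can be swapped or reassigned at runtime, which is the kind of dynamism the architecture aims for.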
Citations: 1
Aging and rejuvenating strategies for fading windows in multi-label classification on data streams
IF 1 Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-03-27 DOI: 10.1145/3555776.3577625
M. Roseberry, S. Džeroski, A. Bifet, Alberto Cano
Combining the challenges of streaming data and multi-label learning, the task of mining a drifting, multi-label data stream requires methods that can accurately predict labelsets, adapt to various types of concept drift, and run fast enough to process each data point before the next arrives. To achieve greater accuracy, many multi-label algorithms use computationally expensive techniques, such as multiple adaptive windows, with little concern for runtime and memory complexity. We present Aging and Rejuvenating kNN (ARkNN), which uses simple resources and efficient strategies to weight instances based on age, predictive performance, and similarity to the incoming data. We break down ARkNN into its component strategies to show the impact of each and experimentally compare ARkNN to seven state-of-the-art methods for learning from multi-label data streams. We demonstrate that it is possible to achieve competitive performance in multi-label classification on streams without sacrificing runtime and memory use, and without using complex and computationally expensive dual-memory strategies.
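The three weighting signals named above (age, past predictive performance, similarity to the incoming point) can be combined multiplicatively in a toy sketch. The decay rate, similarity kernel, and instance records below are assumptions made for illustration, not ARkNN's actual formulas.

```python
import math

def similarity(a, b):
    # RBF-style kernel on squared Euclidean distance: 1.0 at distance zero
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-d2)

def instance_weight(inst, query, now, decay=0.1):
    """Weight a stored instance by age, historical accuracy, and similarity."""
    age_factor = math.exp(-decay * (now - inst["t"]))   # older -> smaller
    return age_factor * inst["perf"] * similarity(inst["x"], query)

store = [
    {"x": (0.0, 0.0), "t": 0, "perf": 0.9},   # old but historically accurate
    {"x": (0.1, 0.0), "t": 9, "perf": 0.5},   # recent, mediocre accuracy
]
query, now = (0.0, 0.0), 10
weights = [instance_weight(inst, query, now) for inst in store]
```

With these invented numbers, recency outweighs the older instance's better track record; tuning the decay rate shifts that balance.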
Citations: 1
A Biomedical Entity Extraction Pipeline for Oncology Health Records in Portuguese
IF 1 Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-03-27 DOI: 10.1145/3555776.3578577
Hugo Sousa, Arian Pasquali, Alípio Jorge, Catarina Sousa Santos, M'ario Amorim Lopes
Textual health records of cancer patients are usually protracted and highly unstructured, making it very time-consuming for health professionals to get a complete overview of the patient's therapeutic course. As such limitations can lead to suboptimal and/or inefficient treatment procedures, healthcare providers would greatly benefit from a system that effectively summarizes the information in those records. With the advent of deep neural models, this objective has been partially attained for English clinical texts; however, the research community still lacks an effective solution for languages with limited resources. In this paper, we present the approach we developed to extract procedures, drugs, and diseases from oncology health records written in European Portuguese. This project was conducted in collaboration with the Portuguese Institute for Oncology, which, besides holding over 10 years of duly protected medical records, also provided oncologist expertise throughout the development of the project. Since there is no annotated corpus for biomedical entity extraction in Portuguese, we also present the strategy we followed in annotating the corpus for the development of the models. The final models, which combined a neural architecture with entity linking, achieved F1 scores of 88.6, 95.0, and 55.8 per cent in the mention extraction of procedures, drugs, and diseases, respectively.
Citations: 1
Towards automated verification of Bitcoin-based decentralised applications
IF 1 Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-03-27 DOI: 10.1145/3555776.3578996
Stefano Bistarelli, A. Bracciali, R. Klomp, Ivan Mercanti
The Bitcoin language SCRIPT has undergone several technically non-trivial updates, still striving for security and minimal risk exposure. Formal verification is of strong interest for script programs that validate the correctness of the Bitcoin decentralised ledger and allow more and more sophisticated protocols and decentralised applications to be implemented on top of Bitcoin transactions. We propose ScriFy, a comprehensive framework for the verification of the current SCRIPT language: a symbolic semantics and execution model, a model checker, and a modular (dockered), open-source verifier. Given the SCRIPT code that locks a Bitcoin transaction, ScriFy returns the minimal information needed to successfully execute it and authorise the transaction. Distinctively, ScriFy features both recently added SCRIPT operators and an enhanced analysis that considers prior information in the ledger. The framework is proved correct and validated through significant examples.
Citations: 1
Towards the support of Industrial IoT applications with TSCH
IF 1 Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2023-03-27 DOI: 10.1145/3555776.3577752
Ivanilson F. Vieira Júnior, M. Curado, J. Granjal
Low-power and Lossy Networks (LLNs) are utilised in numerous Internet of Things (IoT) applications. The IEEE has specified Time-slotted Channel Hopping (TSCH) Media Access Control (MAC) to target the needs of Industrial IoT. TSCH supports deterministic communications over unreliable wireless environments and balances energy, bandwidth and latency. Furthermore, the Minimal 6TiSCH configuration defines the Routing Protocol for Low-power and Lossy Networks (RPL) with Objective Function 0 (OF0). Inherent factors of RPL operation, such as the joining procedure, parent switching, and trickle-timer fluctuations, may introduce overhead and overload the network with control messages. Application and RPL control data may lead to an unpredicted networking bottleneck, potentially causing network instability. Hence, stable RPL operation contributes to healthy TSCH operation. In this paper, we explore TSCH MAC and RPL metrics to identify factors that lead to performance degradation and specify indicators that anticipate network disorders, towards increasing Industrial IoT reliability. A TSCH Schedule Function might employ the identified aspects to foresee disturbances, proactively allocate the proper number of cells, and avoid network congestion.
Citations: 0
Towards a Recommender System-based Process for Managing Risks in Scrum Projects
IF 1 Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2023-03-27 DOI: 10.1145/3555776.3577748
Ademar França de Sousa Neto, F. Ramos, D. Albuquerque, Emanuel Dantas, M. Perkusich, H. Almeida, A. Perkusich
Agile Software Development (ASD) implicitly manages risks through, for example, its short development cycles (i.e., iterations). The absence of explicit risk management activities in ASD might be problematic since this approach cannot handle all types of risks, might cause risks (e.g., technical debt), and does not promote knowledge reuse throughout an organization. Thus, there is a need to bring discipline to agile risk management. This study focuses on bringing such discipline to organizations that conduct multiple projects to develop software products using ASD, specifically, the Scrum framework, which is the most popular way of adopting ASD. For this purpose, we developed a novel solution that was articulated in partnership with an industry partner. It is a process to complement the Scrum framework to use a recommender system that recommends risks and response plans for a target project, given the risks registered for similar projects in an organization's risk memory (i.e., database). We evaluated the feasibility of the proposed recommender system solution using pre-collected datasets from 17 projects from our industry partner. Since we used the KNN algorithm, we focused on finding the best configuration of k (i.e., the number of neighbors) and the similarity measure. As a result, the configuration with the best results had k = 6 (i.e., six neighbors) and used the Manhattan similarity measure, achieving precision = 45%; recall = 90%; and F1-score = 58%. The results show that the proposed recommender system can assist Scrum Teams in identifying risks and response plans, and it is promising to aid decision-making in Scrum-based projects. Thus, we concluded that our proposed recommender system-based risk management process is promising for helping Scrum Teams address risks more efficiently.
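The core of the evaluated recommender is a k-nearest-neighbours lookup over past projects using the Manhattan distance, with k = 6 performing best in the study. A minimal sketch of that retrieval step is shown below; the feature encoding, risk identifiers, and voting scheme are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

def manhattan(a, b):
    """Manhattan (L1) distance between two project feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def recommend_risks(target, risk_memory, k=6):
    """Rank risks registered for the k projects most similar to `target`.

    risk_memory: list of (feature_vector, [risk_ids]) pairs drawn from
    the organisation's risk memory (its database of past projects).
    Returns risk ids ordered by how many neighbours reported them.
    """
    neighbours = sorted(risk_memory, key=lambda p: manhattan(target, p[0]))[:k]
    votes = Counter(r for _, risks in neighbours for r in risks)
    return [risk for risk, _ in votes.most_common()]
```

With a toy memory of three projects, a target close to the first two surfaces their shared risks first, e.g. `recommend_risks((1, 0, 2), memory, k=2)` ranks a risk reported by both neighbours ahead of one reported by a single neighbour.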
{"title":"Towards a Recommender System-based Process for Managing Risks in Scrum Projects","authors":"Ademar França de Sousa Neto, F. Ramos, D. Albuquerque, Emanuel Dantas, M. Perkusich, H. Almeida, A. Perkusich","doi":"10.1145/3555776.3577748","DOIUrl":"https://doi.org/10.1145/3555776.3577748","url":null,"abstract":"Agile Software Development (ASD) implicitly manages risks through, for example, its short development cycles (i.e., iterations). The absence of explicit risk management activities in ASD might be problematic since this approach cannot handle all types of risks, might cause risks (e.g., technical debt), and does not promote knowledge reuse throughout an organization. Thus, there is a need to bring discipline to agile risk management. This study focuses on bringing such discipline to organizations that conduct multiple projects to develop software products using ASD, specifically, the Scrum framework, which is the most popular way of adopting ASD. For this purpose, we developed a novel solution that was articulated in partnership with an industry partner. It is a process to complement the Scrum framework to use a recommender system that recommends risks and response plans for a target project, given the risks registered for similar projects in an organization's risk memory (i.e., database). We evaluated the feasibility of the proposed recommender system solution using pre-collected datasets from 17 projects from our industry partner. Since we used the KNN algorithm, we focused on finding the best configuration of k (i.e., the number of neighbors) and the similarity measure. As a result, the configuration with the best results had k = 6 (i.e., six neighbors) and used the Manhattan similarity measure, achieving precision = 45%; recall = 90%; and F1-score = 58%. The results show that the proposed recommender system can assist Scrum Teams in identifying risks and response plans, and it is promising to aid decision-making in Scrum-based projects. 
Thus, we concluded that our proposed recommender system-based risk management process is promising for helping Scrum Teams address risks more efficiently.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":"30 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73470263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Modeling a Conversational Agent using BDI Framework
IF 1 Q4 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2023-03-27 DOI: 10.1145/3555776.3577657
Alexandre Yukio Ichida, Felipe Meneguzzi
Building conversational agents to help humans in domain-specific tasks is challenging since the agent needs to understand the natural language and act over it while accessing domain expert knowledge. Modern natural language processing techniques led to an expansion of conversational agents, with recent pretrained language models achieving increasingly accurate language recognition results using ever-larger open datasets. However, the black-box nature of such pretrained language models obscures the agent's reasoning and its motivations when responding, leading to unexplained dialogues. We develop a belief-desire-intention (BDI) agent as a task-oriented dialogue system to introduce mental attitudes similar to humans describing their behavior during a dialogue. We compare the resulting model with a pipeline dialogue model by leveraging existing components from dialogue systems and developing the agent's intention selection as a dialogue policy. We show that combining traditional agent modelling approaches, such as BDI, with more recent learning techniques can result in efficient and scrutable dialogue systems.
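The belief-desire-intention cycle the abstract describes can be sketched as a perceive/deliberate/act loop in which intention selection plays the role of the dialogue policy. The sketch below is a minimal illustration under assumed names: the `book_table` task, the slot names, and the NLU frame format are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BDIDialogueAgent:
    """Minimal BDI deliberation cycle for a task-oriented dialogue agent."""
    beliefs: dict = field(default_factory=dict)
    desires: list = field(default_factory=list)
    intention: Optional[str] = None

    def perceive(self, nlu_frame: dict) -> None:
        # Belief revision: fold the NLU output (intent + slots) into beliefs.
        self.beliefs.update(nlu_frame.get("slots", {}))
        goal = nlu_frame.get("intent")
        if goal and goal not in self.desires:
            self.desires.append(goal)

    def deliberate(self) -> Optional[str]:
        # Intention selection doubles as the dialogue policy:
        # commit to the first pending desire.
        self.intention = self.desires[0] if self.desires else None
        return self.intention

    def act(self) -> str:
        # NLG stub: ask for missing slots, else fulfil the intention.
        if self.intention == "book_table":
            missing = [s for s in ("time", "guests") if s not in self.beliefs]
            if missing:
                return f"Could you tell me the {missing[0]}?"
            return "Booking confirmed."
        return "How can I help?"
```

Because the agent's state is explicit beliefs, desires, and a committed intention, every response can be traced back to a mental attitude, which is the scrutability the paper contrasts with black-box language models.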
{"title":"Modeling a Conversational Agent using BDI Framework","authors":"Alexandre Yukio Ichida, Felipe Meneguzzi","doi":"10.1145/3555776.3577657","DOIUrl":"https://doi.org/10.1145/3555776.3577657","url":null,"abstract":"Building conversational agents to help humans in domain-specific tasks is challenging since the agent needs to understand the natural language and act over it while accessing domain expert knowledge. Modern natural language processing techniques led to an expansion of conversational agents, with recent pretrained language models achieving increasingly accurate language recognition results using ever-larger open datasets. However, the black-box nature of such pretrained language models obscures the agent's reasoning and its motivations when responding, leading to unexplained dialogues. We develop a belief-desire-intention (BDI) agent as a task-oriented dialogue system to introduce mental attitudes similar to humans describing their behavior during a dialogue. We compare the resulting model with a pipeline dialogue model by leveraging existing components from dialogue systems and developing the agent's intention selection as a dialogue policy. We show that combining traditional agent modelling approaches, such as BDI, with more recent learning techniques can result in efficient and scrutable dialogue systems.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":"8 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74685130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0