Mohammed Naif Alatawi, Saleh Alyahyan, Shariq Hussain, Abdullah Alshammari, Abdullah A. Aldaeej, Ibrahim Khalil Alali, H. Alwageed
In the realm of software project management, predicting and mitigating risks are pivotal for successful project execution. Traditional risk assessment methods have limitations in handling complex and dynamic software projects. This study presents a novel approach that leverages artificial neural networks (ANNs) to enhance risk prediction accuracy. We utilize historical project data, encompassing project complexity, financial factors, performance metrics, schedule adherence, and user-related variables, to train the ANN model. Our approach involves optimizing the ANN architecture, with various configurations tested to identify the most effective setup. We compare the performance of mean squared error (MSE) and mean absolute error (MAE) as error functions and find that MAE yields superior results. Furthermore, we demonstrate the effectiveness of our model through comprehensive risk assessment. We predict both the overall project risk and individual risk factors, providing project managers with a valuable tool for risk mitigation. Validation results confirm the robustness of our approach when applied to previously unseen data. The achieved accuracy of 97.12% (or 99.12% with uncertainty consideration) underscores the potential of ANNs in risk management. This research contributes to the software project management field by offering an innovative and highly accurate risk assessment model. It empowers project managers to make informed decisions and proactively address potential risks, ultimately enhancing project success.
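A minimal sketch (not the authors' code) of the kind of model the abstract describes: a small feedforward ANN regressor over project-level features, trained with an MAE (L1) loss, the error function the study reports as performing best. The feature names, layer sizes, and synthetic data below are illustrative assumptions only.

```python
import torch
from torch import nn

torch.manual_seed(0)
# Stand-in for historical project records:
# [complexity, financial factor, performance metric, schedule slip, user factor]
X = torch.rand(500, 5)
y = (0.4 * X[:, 0] + 0.3 * X[:, 3] + 0.3 * torch.rand(500)).unsqueeze(1)  # risk score in [0, 1]

model = nn.Sequential(
    nn.Linear(5, 16), nn.ReLU(),
    nn.Linear(16, 8), nn.ReLU(),
    nn.Linear(8, 1), nn.Sigmoid(),   # overall project risk score
)
loss_fn = nn.L1Loss()                # MAE, per the abstract's comparison against MSE
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
print(f"final training MAE: {loss.item():.4f}")
```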
{"title":"A Data-Driven Artificial Neural Network Approach to Software Project Risk Assessment","authors":"Mohammed Naif Alatawi, Saleh Alyahyan, Shariq Hussain, Abdullah Alshammari, Abdullah A. Aldaeej, Ibrahim Khalil Alali, H. Alwageed","doi":"10.1049/2023/4324783","DOIUrl":"https://doi.org/10.1049/2023/4324783","url":null,"abstract":"In the realm of software project management, predicting and mitigating risks are pivotal for successful project execution. Traditional risk assessment methods have limitations in handling complex and dynamic software projects. This study presents a novel approach that leverages artificial neural networks (ANNs) to enhance risk prediction accuracy. We utilize historical project data, encompassing project complexity, financial factors, performance metrics, schedule adherence, and user-related variables, to train the ANN model. Our approach involves optimizing the ANN architecture, with various configurations tested to identify the most effective setup. We compare the performance of mean squared error (MSE) and mean absolute error (MAE) as error functions and find that MAE yields superior results. Furthermore, we demonstrate the effectiveness of our model through comprehensive risk assessment. We predict both the overall project risk and individual risk factors, providing project managers with a valuable tool for risk mitigation. Validation results confirm the robustness of our approach when applied to previously unseen data. The achieved accuracy of 97.12% (or 99.12% with uncertainty consideration) underscores the potential of ANNs in risk management. This research contributes to the software project management field by offering an innovative and highly accurate risk assessment model. It empowers project managers to make informed decisions and proactively address potential risks, ultimately enhancing project success.","PeriodicalId":50378,"journal":{"name":"IET Software","volume":" 3","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138960694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mobile applications are continuously increasing in prevalence. One of the main challenges in mobile application development is creating cross-platform applications. To facilitate developing cross-platform applications, the software engineering community created several solutions, one of which is React Native (RN), which is a popular cross-platform framework. The software engineering literature demonstrated the effectiveness of Stack Overflow (SO) in providing real-world perspectives on a variety of technical subjects. Therefore, this study aims to gain a better understanding of the stance of RN on SO. We identified and analyzed 131,620 SO RN-related questions. Moreover, we observed how the interest toward RN on SO evolves over time. Additionally, we utilized Latent Dirichlet Allocation (LDA) to identify RN-related topics that are discussed within the questions. Afterward, we utilized a number of proxy measures to estimate the popularity and difficulty of these topics. The results revealed that interest toward RN on SO was generally increasing. Moreover, RN-related questions revolve around six topics, with the topics of layout and navigation being the most popular and the topic of iOS issues being the most difficult. Software engineering researchers, practitioners, educators, and RN contributors may find the results of this study beneficial in guiding their future RN efforts.
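A rough sketch, under assumptions, of how LDA can surface topics from Stack Overflow question texts as the abstract describes. The tiny in-line corpus and the number of topics are placeholders, not the study's actual data or configuration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

questions = [
    "react native flexbox layout not centering view on android",
    "react navigation stack navigator header title missing",
    "how to fix ios build error after pod install in react native",
    "fetch api request fails with network error on android emulator",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(questions)

lda = LatentDirichletAllocation(n_components=2, random_state=0)  # the study reports six topics
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {topic_idx}: {', '.join(top_terms)}")
# Topic popularity and difficulty would then be estimated with proxy measures,
# e.g., average views and score, or the share of questions without an accepted answer.
```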
{"title":"An Observational Study on React Native (RN) Questions on Stack Overflow (SO)","authors":"Luluh Albesher, Razan Aldossari, Reem Alfayez","doi":"10.1049/2023/6613434","DOIUrl":"https://doi.org/10.1049/2023/6613434","url":null,"abstract":"Mobile applications are continuously increasing in prevalence. One of the main challenges in mobile application development is creating cross-platform applications. To facilitate developing cross-platform applications, the software engineering community created several solutions, one of which is React Native (RN), which is a popular cross-platform framework. The software engineering literature demonstrated the effectiveness of Stack Overflow (SO) in providing real-world perspectives on a variety of technical subjects. Therefore, this study aims to gain a better understanding of the stance of RN on SO. We identified and analyzed 131,620 SO RN-related questions. Moreover, we observed how the interest toward RN on SO evolves over time. Additionally, we utilized Latent Dirichlet Allocation (LDA) to identify RN-related topics that are discussed within the questions. Afterward, we utilized a number of proxy measures to estimate the popularity and difficulty of these topics. The results revealed that interest toward RN on SO was generally increasing. Moreover, RN-related questions revolve around six topics, with the topics of layout and navigation being the most popular and the topic of iOS issues being the most difficult. Software engineering researchers, practitioners, educators, and RN contributors may find the results of this study beneficial in guiding their future RN efforts.","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"1 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139199214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the rapid development of artificial intelligence and digital media technology, modern animation technology has greatly improved the creative efficiency of creators through computer-generated graphics, electronic manual painting, and other means, and the number of animation works has grown explosively. Automatically identifying emotional expression within animation works holds immense significance both for animation production learners and for the creation of intelligent animation works. Consequently, emotion recognition has emerged as a focal point of research attention. This paper focuses on the analysis of emotional states in animation works. First, by analyzing the characteristics of emotional expression in animation, we determine that sound and video information form the data foundation of the model. Subsequently, we extract features from each of these two modalities using gated recurrent units (GRUs). Finally, we employ a multiattention mechanism to fuse the multimodal information derived from the audio and video sources. The experimental outcomes demonstrate that the proposed framework attains a recognition accuracy exceeding 90% for the three distinct emotional categories. Remarkably, the recognition rate for negative emotions reaches 94.7%, significantly surpassing single-modal approaches and other feature fusion methods. This research presents valuable insights for the training of multimedia animation production professionals, empowering them to better grasp the nuances of emotion transfer within animation and thereby realize productions of higher quality, which will greatly improve the market efficiency of the animation industry.
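A minimal, hypothetical sketch of the kind of architecture the abstract outlines: separate GRU encoders for the audio and video streams, fused with a multi-head attention layer before a three-class emotion classifier. Dimensions and the exact fusion strategy are assumptions, not the paper's model.

```python
import torch
from torch import nn

class AVEmotionNet(nn.Module):
    def __init__(self, audio_dim=40, video_dim=128, hidden=64, n_classes=3):
        super().__init__()
        self.audio_gru = nn.GRU(audio_dim, hidden, batch_first=True)
        self.video_gru = nn.GRU(video_dim, hidden, batch_first=True)
        self.fusion = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, audio, video):
        a, _ = self.audio_gru(audio)        # (B, Ta, hidden)
        v, _ = self.video_gru(video)        # (B, Tv, hidden)
        # Let audio frames attend over video frames, then pool over time.
        fused, _ = self.fusion(query=a, key=v, value=v)
        return self.classifier(fused.mean(dim=1))

model = AVEmotionNet()
audio = torch.randn(8, 100, 40)   # e.g., 100 frames of MFCC-like audio features
video = torch.randn(8, 30, 128)   # e.g., 30 frames of visual embeddings
logits = model(audio, video)      # (8, 3) scores for the three emotion classes
print(logits.shape)
```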
{"title":"Analysis of Emotional Deconstruction and the Role of Emotional Value for Learners in Animation Works Based on Digital Multimedia Technology","authors":"Shilei Liang","doi":"10.1049/2023/5566781","DOIUrl":"https://doi.org/10.1049/2023/5566781","url":null,"abstract":"With the rapid development of artificial intelligence and digital media technology, modern animation technology has greatly improved the creative efficiency of creators through computer-generated graphics, electronic manual painting, and other means, and its number has also experienced explosive growth. The intelligent completion of emotional expression identification within animation works holds immense significance for both animation production learners and the creation of intelligent animation works. Consequently, emotion recognition has emerged as a focal point of research attention. This paper focuses on the analysis of emotional states in animation works. First, by analyzing the characteristics of emotional expression in animation, the model data foundation for using sound and video information is determined. Subsequently, we perform individual feature extraction for these two types of information using gated recurrent unit (GRU). Finally, we employ a multiattention mechanism to fuse the multimodal information derived from audio and video sources. The experimental outcomes demonstrate that the proposed method framework attains a recognition accuracy exceeding 90% for the three distinct emotional categories. Remarkably, the recognition rate for negative emotions reaches an impressive 94.7%, significantly surpassing the performance of single-modal approaches and other feature fusion methods. This research presents invaluable insights for the training of multimedia animation production professionals, empowering them to better grasp the nuances of emotion transfer within animation and, thereby, realize productions of elevated quality, which will greatly improve the market operational efficiency of animation industry.","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"48 2","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139247579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The performance of software defect prediction (SDP) models determines the priority of test resource allocation. Researchers also use interpretability techniques to gain empirical knowledge about software quality from SDP models. However, SDP methods designed in past research rarely consider the impact of data transformation methods, simple but commonly used preprocessing techniques, on the performance and interpretability of SDP models. Therefore, in this paper, we investigate the impact of three data transformation methods (Log, Minmax, and Z-score) on the performance and interpretability of SDP models. Through empirical research on (i) six classification techniques (random forest, decision tree, logistic regression, Naive Bayes, K-nearest neighbors, and multilayer perceptron), (ii) six performance evaluation indicators (Accuracy, Precision, Recall, F1, MCC, and AUC), (iii) two interpretability methods (permutation and SHAP), (iv) two feature importance measures (Top-k feature rank overlap and difference), and (v) three datasets (Promise, Relink, and AEEEM), our results show that the data transformation methods can significantly improve the performance of the SDP models and greatly affect the variation of the most important features. Specifically, the impact of data transformation methods on the performance and interpretability of SDP models depends on the classification techniques and evaluation indicators. We observe that log transformation improves NB model performance by 7%–61% on the other five indicators with a 5% drop in Precision. Minmax and Z-score transformations improve NB model performance by 2%–9% across all indicators. However, all three transformation methods lead to substantial changes in the Top-5 important feature ranks, with differences exceeding 2 in 40%–80% of cases (detailed results are available in the main content). Based on our findings, we recommend (1) considering the impact of data transformation methods on model performance and interpretability when designing SDP approaches, as transformations can improve model accuracy but may also obscure important features, which leads to challenges in interpretation; (2) conducting comparative experiments with and without the transformations to validate the effectiveness of proposed methods designed to improve prediction performance; and (3) tracking changes in the most important features before and after applying data transformation methods to ensure precise and traceable interpretability conclusions. Our study reminds researchers and practitioners of the need for comprehensive considerations even when using other similar simple data processing methods.
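A compact sketch, illustrative rather than the study's experimental setup, of applying the three transformations the abstract examines (log, min-max, and z-score) before the same classifier and comparing one performance indicator. The synthetic dataset and the choice of Naive Bayes and MCC are assumptions for demonstration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, StandardScaler, FunctionTransformer
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import matthews_corrcoef
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=600, n_features=20, weights=[0.8], random_state=0)
X = np.abs(X)  # keep values non-negative so the log transform is well-defined
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

transforms = {
    "none": None,
    "log": FunctionTransformer(np.log1p),
    "minmax": MinMaxScaler(),
    "zscore": StandardScaler(),
}
for name, tf in transforms.items():
    steps = ([tf] if tf is not None else []) + [GaussianNB()]
    model = make_pipeline(*steps)
    model.fit(X_tr, y_tr)
    print(f"{name:>6}: MCC = {matthews_corrcoef(y_te, model.predict(X_te)):.3f}")
```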
{"title":"Evaluating the Impact of Data Transformation Techniques on the Performance and Interpretability of Software Defect Prediction Models","authors":"Yu Zhao, Zhiqiu Huang, Lina Gong, Yi Zhu, Qiao Yu, Yuxiang Gao","doi":"10.1049/2023/6293074","DOIUrl":"https://doi.org/10.1049/2023/6293074","url":null,"abstract":"The performance of software defect prediction (SDP) models determines the priority of test resource allocation. Researchers also use interpretability techniques to gain empirical knowledge about software quality from SDP models. However, SDP methods designed in the past research rarely consider the impact of data transformation methods, simple but commonly used preprocessing techniques, on the performance and interpretability of SDP models. Therefore, in this paper, we investigate the impact of three data transformation methods (Log, Minmax, and Z-score) on the performance and interpretability of SDP models. Through empirical research on (i) six classification techniques (random forest, decision tree, logistic regression, Naive Bayes, K-nearest neighbors, and multilayer perceptron), (ii) six performance evaluation indicators (Accuracy, Precision, Recall, F1, MCC, and AUC), (iii) two interpretable methods (permutation and SHAP), (iv) two feature importance measures (Top-k feature rank overlap and difference), and (v) three datasets (Promise, Relink, and AEEEM), our results show that the data transformation methods can significantly improve the performance of the SDP models and greatly affect the variation of the most important features. Specifically, the impact of data transformation methods on the performance and interpretability of SDP models depends on the classification techniques and evaluation indicators. We observe that log transformation improves NB model performance by 7%–61% on the other five indicators with a 5% drop in Precision. Minmax and Z-score transformation improves NB model performance by 2%–9% across all indicators. However, all three transformation methods lead to substantial changes in the Top-5 important feature ranks, with differences exceeding 2 in 40%–80% of cases (detailed results available in the main content). Based on our findings, we recommend that (1) considering the impact of data transformation methods on model performance and interpretability when designing SDP approaches as transformations can improve model accuracy, and potentially obscure important features, which lead to challenges in interpretation, (2) conducting comparative experiments with and without the transformations to validate the effectiveness of proposed methods which are designed to improve the prediction performance, and (3) tracking changes in the most important features before and after applying data transformation methods to ensure precise and traceable interpretability conclusions to gain insights. 
Our study reminds researchers and practitioners of the need for comprehensive considerations even when using other similar simple data processing methods.","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"56 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134991190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BTR code (originally “Beam Transmission and Re-ionization”, 1995) is used for Neutral Beam Injection (NBI) design; it is also applied to the injector system of ITER. In 2008, the BTR model was extended to include the beam interaction with plasmas and direct beam losses in the tokamak. For many years, BTR has been widely used for various NBI designs for efficient heating and current drive in nuclear fusion devices, for plasma scenario control, and for diagnostics. BTR analysis is especially important for ‘beam-driven’ fusion devices, such as fusion neutron source (FNS) tokamaks, since their operation depends on a high NBI input for non-inductive current drive and fusion yield. BTR calculates detailed power deposition maps and particle losses, taking into account ionized beam fractions and background electromagnetic fields; these results are used for the overall NBI performance analysis. The BTR code is open for public use; it is fully interactive and supplied with an intuitive graphical user interface (GUI). The input configuration is flexibly adapted to any specific NBI geometry. High running speed and full control over the running options allow the user to perform multiple parametric runs on the fly. The paper describes the detailed physics of BTR, its numerical methods, the graphical user interface, and examples of BTR application. The code is still evolving; basic support is available to all BTR users.
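This is not BTR itself, only a toy illustration of the underlying idea that a neutral beam attenuates along its path through plasma, with the lost fraction becoming ionized and deposited. The density, cross-section, and geometry values are made-up assumptions.

```python
import numpy as np

n_e = 1.0e19          # assumed electron density along the path, m^-3
sigma_stop = 2.0e-20  # assumed effective beam-stopping cross-section, m^2
path = np.linspace(0.0, 2.0, 200)   # beamline coordinate, m

# Fraction of neutrals surviving to each point: exp(-integral of n*sigma dl)
survival = np.exp(-n_e * sigma_stop * path)
deposited = -np.gradient(survival, path)   # local ionization (deposition) profile

print(f"neutral fraction at exit: {survival[-1]:.3f}")
print(f"peak local deposition at l = {path[np.argmax(deposited)]:.2f} m")
```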
{"title":"Beam Transmission (BTR) Software for Efficient Neutral Beam Injector Design and Tokamak Operation","authors":"Eugenia Dlougach, Margarita Kichik","doi":"10.3390/software2040022","DOIUrl":"https://doi.org/10.3390/software2040022","url":null,"abstract":"BTR code (originally—“Beam Transmission and Re-ionization”, 1995) is used for Neutral Beam Injection (NBI) design; it is also applied to the injector system of ITER. In 2008, the BTR model was extended to include the beam interaction with plasmas and direct beam losses in tokamak. For many years, BTR has been widely used for various NBI designs for efficient heating and current drive in nuclear fusion devices for plasma scenario control and diagnostics. BTR analysis is especially important for ‘beam-driven’ fusion devices, such as fusion neutron source (FNS) tokamaks, since their operation depends on a high NBI input in non-inductive current drive and fusion yield. BTR calculates detailed power deposition maps and particle losses with an account of ionized beam fractions and background electromagnetic fields; these results are used for the overall NBI performance analysis. BTR code is open for public usage; it is fully interactive and supplied with an intuitive graphical user interface (GUI). The input configuration is flexibly adapted to any specific NBI geometry. High running speed and full control over the running options allow the user to perform multiple parametric runs on the fly. The paper describes the detailed physics of BTR, numerical methods, graphical user interface, and examples of BTR application. The code is still in evolution; basic support is available to all BTR users.","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"42 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135266187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hongru Yang, Jinchen Xu, Jiangwei Hao, Zuoyan Zhang, Bei Zhou
The use of floating-point numbers inevitably leads to inaccurate results and, in certain cases, significant program failures. Detecting floating-point errors is critical to ensuring that the outputs of floating-point programs are correct. However, due to the sparsity of floating-point errors, only a limited number of inputs can trigger significant floating-point errors, and determining how to detect these inputs and selecting the appropriate search technique are critical to detecting significant errors. This paper proposes a characteristic particle swarm optimization (CPSO) algorithm based on the particle swarm optimization (PSO) algorithm. The floating-point expression error detection tool PSOED is implemented, which can detect significant errors in floating-point arithmetic expressions and provide the corresponding inputs. The method presented in this paper is based on two insights: (1) treating floating-point error detection as a search problem and selecting reliable heuristic search strategies to solve it; (2) fully utilizing the error distribution laws of expressions and the distribution characteristics of floating-point numbers to guide search space generation and improve search efficiency. This paper selects 28 expressions from the FPBench standard suite as test cases, uses PSOED to detect the maximum error of the expressions, and compares the results to the current dynamic error detection tools S3FP and Herbie. PSOED detects larger maximum errors than S3FP on 100% of the expressions, larger errors than Herbie on 68%, and errors equivalent to Herbie's on 14%. The results of the experiments indicate that PSOED can detect significant floating-point expression errors.
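A simplified, hypothetical sketch of the search idea behind such a tool: use particle swarm optimization to look for inputs that maximize the error of a double-precision expression against a high-precision oracle. The expression, bounds, and PSO hyperparameters are illustrative; the paper's CPSO adds characteristic-guided search-space generation that is not reproduced here.

```python
import numpy as np
from mpmath import mp, mpf, cos

mp.dps = 50  # high-precision reference

def expr_double(x: float) -> float:
    return (1.0 - np.cos(x)) / (x * x)          # numerically unstable near 0

def expr_exact(x: float):
    xm = mpf(x)
    return (1 - cos(xm)) / (xm * xm)

def rel_error(x: float) -> float:
    exact = expr_exact(x)
    if exact == 0:
        return 0.0
    return float(abs((mpf(expr_double(x)) - exact) / exact))

rng = np.random.default_rng(1)
lo, hi = 1e-8, 1.0
pos = rng.uniform(lo, hi, size=30)              # particle positions (candidate inputs)
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([rel_error(x) for x in pos])
gbest = pbest[pbest_val.argmax()]

for _ in range(50):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([rel_error(x) for x in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()]

print(f"worst input found: {gbest:.3e}, relative error: {pbest_val.max():.3e}")
```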
{"title":"Detecting Floating-Point Expression Errors Based Improved PSO Algorithm","authors":"Hongru Yang, Jinchen Xu, Jiangwei Hao, Zuoyan Zhang, Bei Zhou","doi":"10.1049/2023/6681267","DOIUrl":"https://doi.org/10.1049/2023/6681267","url":null,"abstract":"The use of floating-point numbers inevitably leads to inaccurate results and, in certain cases, significant program failures. Detecting floating-point errors is critical to ensuring that floating-point programs outputs are proper. However, due to the sparsity of floating-point errors, only a limited number of inputs can cause significant floating-point errors, and determining how to detect these inputs and to selecting the appropriate search technique is critical to detecting significant errors. This paper proposes characteristic particle swarm optimization (CPSO) algorithm based on particle swarm optimization (PSO) algorithm. The floating-point expression error detection tool PSOED is implemented, which can detect significant errors in floating-point arithmetic expressions and provide corresponding input. The method presented in this paper is based on two insights: (1) treating floating-point error detection as a search problem and selecting reliable heuristic search strategies to solve the problem; (2) fully utilizing the error distribution laws of expressions and the distribution characteristics of floating-point numbers to guide the search space generation and improve the search efficiency. This paper selects 28 expressions from the FPBench standard set as test cases, uses PSOED to detect the maximum error of the expressions, and compares them to the current dynamic error detection tools S3FP and Herbie. PSOED detects the maximum error 100% better than S3FP, 68% better than Herbie, and 14% equivalent to Herbie. The results of the experiments indicate that PSOED can detect significant floating-point expression errors.","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135413335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deuslirio da Silva-Junior, Valdemar V. Graciano-Neto, Diogo M. de-Freitas, Plino de Sá Leitão-Junior, Mohamad Kassab
Software testing and debugging are standard practices of software quality assurance since they enable the identification and correction of failures. Benchmarks have been used in that context as groups of programs that support the comparison of different techniques according to pre-established parameters. However, the reasons that inspire researchers to propose novel benchmarks are not fully understood. This article reports the investigation, identification, classification, and externalization of the state of the art on the proposition of benchmarks in the software testing and debugging domains. The study was carried out using systematic mapping procedures according to guidelines widely followed in the software engineering literature. The search identified 1674 studies, from which 25 were selected for analysis. A list of benchmarks is provided and descriptively mapped according to their characteristics, motivations, and scope of use. The lack of data to support the comparison between available and novel software testing and debugging techniques is the main motivation for the proposition of benchmarks. Advancements in the standardization and prescription of benchmark structure and composition are still required. Establishing such a standard could foster benchmark reuse, thereby saving time and effort in the engineering of benchmarks for software testing and debugging.
{"title":"A Systematic Mapping of the Proposition of Benchmarks in the Software Testing and Debugging Domain","authors":"Deuslirio da Silva-Junior, Valdemar V. Graciano-Neto, Diogo M. de-Freitas, Plino de Sá Leitão-Junior, Mohamad Kassab","doi":"10.3390/software2040021","DOIUrl":"https://doi.org/10.3390/software2040021","url":null,"abstract":"Software testing and debugging are standard practices of software quality assurance since they enable the identification and correction of failures. Benchmarks have been used in that context as a group of programs to support the comparison of different techniques according to pre-established parameters. However, the reasons that inspire researchers to propose novel benchmarks are not fully understood. This article reports the investigation, identification, classification, and externalization of the state of the art about the proposition of benchmarks on software testing and debugging domains. The study was carried out using systematic mapping procedures according to the guidelines widely followed by software engineering literature. The search identified 1674 studies, from which, 25 were selected for analysis. A list of benchmarks is provided and descriptively mapped according to their characteristics, motivations, and scope of use for their creation. The lack of data to support the comparison between available and novel software testing and debugging techniques is the main motivation for the proposition of benchmarks. Advancements in the standardization and prescription of benchmark structure and composition are still required. Establishing such a standard could foster benchmark reuse, thereby saving time and effort in the engineering of benchmarks for software testing and debugging.","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136013815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The core reasoning task for datalog engines is materialization: the evaluation of a datalog program over a database alongside its physical incorporation into the database itself. The de facto method of computing it is the recursive application of inference rules. Because materialization is a costly operation, datalog engines must provide incremental materialization, that is, adjust the computation to new data instead of restarting from scratch. One of the major caveats is that deleting data is considerably more involved than adding it, since one has to take into account all data that has been entailed by what is being deleted. Differential dataflow is a computational model that provides efficient incremental maintenance, notably with equal performance for additions and deletions, and work distribution of iterative dataflows. In this paper, we investigate the performance of materialization with three reference datalog implementations, of which one is built on top of a lightweight relational engine and the other two are differential-dataflow and non-differential versions of the same rewrite algorithm with the same optimizations. Experimental results suggest that monotonic aggregation is more powerful than merely ascending the powerset lattice.
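A toy sketch of datalog materialization by recursive rule application (semi-naive evaluation of transitive closure). It illustrates the baseline style of computation the abstract contrasts with differential dataflow; incremental deletion, the hard case discussed there, is deliberately not handled. The example relation names are assumptions.

```python
def materialize_reachability(edges: set[tuple[str, str]]) -> set[tuple[str, str]]:
    # Rules: path(x, y) :- edge(x, y).
    #        path(x, z) :- path(x, y), edge(y, z).
    path = set(edges)
    delta = set(edges)                 # facts derived in the previous round
    while delta:
        new = {(x, z) for (x, y) in delta for (y2, z) in edges if y == y2}
        delta = new - path             # only genuinely new facts feed the next round
        path |= delta
    return path

edges = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(materialize_reachability(edges)))
# Adding a fact only requires re-running from the new deltas; deleting one would
# require revisiting every derived fact that might depend on it, which is why
# differential approaches that track derivations are attractive.
```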
{"title":"A Differential Datalog Interpreter","authors":"Matthew James Stephenson","doi":"10.3390/software2030020","DOIUrl":"https://doi.org/10.3390/software2030020","url":null,"abstract":"The core reasoning task for datalog engines is materialization, the evaluation of a datalog program over a database alongside its physical incorporation into the database itself. The de-facto method of computing is through the recursive application of inference rules. Due to it being a costly operation, it is a must for datalog engines to provide incremental materialization; that is, to adjust the computation to new data instead of restarting from scratch. One of the major caveats is that deleting data is notoriously more involved than adding since one has to take into account all possible data that has been entailed from what is being deleted. Differential dataflow is a computational model that provides efficient incremental maintenance, notoriously with equal performance between additions and deletions, and work distribution of iterative dataflows. In this paper, we investigate the performance of materialization with three reference datalog implementations, out of which one is built on top of a lightweight relational engine, and the two others are differential-dataflow and non-differential versions of the same rewrite algorithm with the same optimizations. Experimental results suggest that monotonic aggregation is more powerful than ascenting merely the powerset lattice.","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136236435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Microservices have emerged as a prevalent architectural style in modern software development, replacing traditional monolithic architectures. The decomposition of business functionality into distributed microservices offers numerous benefits, but introduces increased complexity to the overall application. Consequently, the complexity of authorization in microservice-based applications necessitates a comprehensive approach that integrates authorization as an inherent component from the beginning. This paper presents a systematic approach for achieving fine-grained user authorization using Attribute-Based Access Control (ABAC). The proposed approach emphasizes structure preservation, facilitating traceability throughout the various phases of application development. As a result, authorization artifacts can be traced seamlessly from the initial analysis phase to the subsequent implementation phase. One significant contribution is the development of a language to formulate natural language authorization requirements and policies. These natural language authorization policies can subsequently be implemented using the policy language Rego. By leveraging the analysis of software artifacts, the proposed approach enables the creation of comprehensive and tailored authorization policies.
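A minimal sketch of an attribute-based access control (ABAC) decision, written in Python rather than the Rego policy language the paper targets, purely to illustrate the structure of a fine-grained policy: a decision over subject, action, and resource attributes. The attribute names and the example rule are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Request:
    subject: dict      # e.g., {"role": "editor", "department": "sales"}
    action: str        # e.g., "update"
    resource: dict     # e.g., {"type": "order", "owner_department": "sales"}

def allow(req: Request) -> bool:
    # Illustrative rule: editors may update orders owned by their own department.
    return (
        req.subject.get("role") == "editor"
        and req.action == "update"
        and req.resource.get("type") == "order"
        and req.subject.get("department") == req.resource.get("owner_department")
    )

req = Request(
    subject={"role": "editor", "department": "sales"},
    action="update",
    resource={"type": "order", "owner_department": "sales"},
)
print(allow(req))  # True; an equivalent Rego rule would express the same conditions
```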
{"title":"User Authorization in Microservice-Based Applications","authors":"Niklas Sänger, Sebastian Abeck","doi":"10.3390/software2030019","DOIUrl":"https://doi.org/10.3390/software2030019","url":null,"abstract":"Microservices have emerged as a prevalent architectural style in modern software development, replacing traditional monolithic architectures. The decomposition of business functionality into distributed microservices offers numerous benefits, but introduces increased complexity to the overall application. Consequently, the complexity of authorization in microservice-based applications necessitates a comprehensive approach that integrates authorization as an inherent component from the beginning. This paper presents a systematic approach for achieving fine-grained user authorization using Attribute-Based Access Control (ABAC). The proposed approach emphasizes structure preservation, facilitating traceability throughout the various phases of application development. As a result, authorization artifacts can be traced seamlessly from the initial analysis phase to the subsequent implementation phase. One significant contribution is the development of a language to formulate natural language authorization requirements and policies. These natural language authorization policies can subsequently be implemented using the policy language Rego. By leveraging the analysis of software artifacts, the proposed approach enables the creation of comprehensive and tailored authorization policies.","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"154 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135063218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bui Quang Truong, Anh Nguyen-Duc, Nguyen Thi Cam Van
In recent years, research on digital transformation (DT) and business process management (BPM) has gained significant attention in the field of business and management. This paper conducts a comprehensive bibliometric analysis of global research on DT and BPM from 2007 to 2022. A total of 326 papers were selected from Web of Science and Scopus for analysis. Using bibliometric methods, we evaluated the current state and future research trends of DT and BPM. Our analysis reveals that the number of publications on DT and BPM has grown significantly over time, with the Business Process Management Journal being the most active outlet. The countries that have contributed the most to this field are Germany (with four universities in the top 10) and the USA. The analysis also showed that “artificial intelligence” is a technology that has been studied extensively and is increasingly asserted to influence companies’ business processes. Additionally, the study provides valuable insights from the co-citation network analysis. Based on our findings, we provide recommendations for future research directions on DT and BPM. This study contributes to a better understanding of the current state of research on DT and BPM and provides insights for future research.
{"title":"A Quantitative Review of the Research on Business Process Management in Digital Transformation: A Bibliometric Approach","authors":"Bui Quang Truong, Anh Nguyen-Duc, Nguyen Thi Cam Van","doi":"10.3390/software2030018","DOIUrl":"https://doi.org/10.3390/software2030018","url":null,"abstract":"In recent years, research on digital transformation (DT) and business process management (BPM) has gained significant attention in the field of business and management. This paper aims to conduct a comprehensive bibliometric analysis of global research on DT and BPM from 2007 to 2022. A total of 326 papers were selected from Web of Science and Scopus for analysis. Using bibliometric methods, we evaluated the current state and future research trends of DT and BPM. Our analysis reveals that the number of publications on DT and BPM has grown significantly over time, with the Business Process Management Journal being the most active. The countries that have contributed the most to this field are Germany (with four universities in the top 10) and the USA. The Business Process Management Journal is the most active in publishing research on digital transformation and business process management. The analysis showed that “artificial intelligence” is a technology that has been studied extensively and is increasingly asserted to influence companies’ business processes. Additionally, the study provides valuable insights from the co-citation network analysis. Based on our findings, we provide recommendations for future research directions on DT and BPM. This study contributes to a better understanding of the current state of research on DT and BPM and provides insights for future research.","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"84 2 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89314142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}