Current automotive E/E architectures are undergoing significant transformations: computing-intensive advanced driver-assistance systems, bandwidth-hungry infotainment systems, the connection of the vehicle to the internet, and the consequent need for cybersecurity drive the centralization of E/E architectures. A centralized architecture is often seen as a key enabler to master these challenges. Available research focuses mostly on the different types of E/E architectures and contrasts their advantages and disadvantages. There is a research gap on guidelines for system designers and function developers to analyze the centralization potential of their systems. The present paper aims to quantify centralization potential by reviewing relevant literature and conducting qualitative interviews with industry practitioners. In the literature, we identified seven key automotive system properties that are reaching limitations in current automotive architectures: busload, functional safety, computing power, feature dependencies, development and maintenance costs, error rate, as well as modularity and flexibility. These properties serve as quantitative evaluation criteria for estimating whether centralization would enhance overall system performance. In the interviews, we validated centralization and its foundation, conceptual systems engineering, as capabilities to mitigate these limitations. By focusing on practical insights and lessons learned, this research provides system designers with actionable guidance to optimize their systems, addressing the outlined challenges while avoiding a monolithic architecture. This paper bridges the gap between theoretical research and practical application, offering valuable takeaways for practitioners.
Pseudocode can efficiently represent algorithm logic, but manually converting it to executable code is time-consuming. Recent work has applied autoregressive (AR) models to automate pseudocode-to-code conversion, achieving good results but suffering from slow generation. Non-autoregressive (NAR) models offer the advantage of parallel generation; however, they struggle to capture contextual information effectively, which can degrade the quality of the generated output. This paper presents an improved NAR model that balances quality and efficiency in pseudocode conversion. First, two strategies are proposed to address the out-of-vocabulary and repetition problems. Second, an improved NAR model is built using linear smoothing and adaptive techniques in the transition matrix, which mitigate the “winner takes all” effect. Finally, a new synthesis potential metric is proposed for evaluating pseudocode conversion. Experimental results show that the proposed method matches AR model performance while accelerating generation more than 10-fold. Furthermore, the proposed NAR model narrows the BLEU-score gap with the AR model on the EN-DE and DE-EN tasks of the WMT14 machine translation benchmark.
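The transition-matrix smoothing idea can be illustrated with a minimal sketch. This is not the paper's exact formulation: the function names, the interpolation-with-uniform smoothing, and the entropy-driven adaptive rule are illustrative assumptions about how a peaked ("winner takes all") row-stochastic transition matrix could be softened.

```python
import numpy as np

def smooth_transitions(T, alpha=0.1):
    """Linear smoothing: interpolate a row-stochastic transition matrix
    with the uniform distribution to soften peaked rows.
    alpha is an assumed smoothing hyperparameter."""
    V = T.shape[1]
    return (1.0 - alpha) * T + alpha / V

def adaptive_smooth(T, alpha_max=0.3):
    """Adaptive variant (illustrative): rows with low entropy, i.e.
    highly peaked rows, receive a larger smoothing weight."""
    eps = 1e-12
    entropy = -(T * np.log(T + eps)).sum(axis=1, keepdims=True)
    max_entropy = np.log(T.shape[1])
    # alpha grows as a row's entropy shrinks (more peaked -> more smoothing)
    alpha = alpha_max * (1.0 - entropy / max_entropy)
    return (1.0 - alpha) * T + alpha / T.shape[1]

# Example: the peaked first row is flattened; rows remain stochastic.
T = np.array([[0.97, 0.02, 0.01],
              [0.34, 0.33, 0.33]])
T_s = smooth_transitions(T, alpha=0.1)
T_a = adaptive_smooth(T)
```

Both variants preserve row-stochasticity by construction, since each row is a convex combination of the original row and the uniform distribution.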
To ensure the proper testing of any software product, it is imperative to cover various functional and non-functional requirements at different testing levels (e.g., unit or integration testing). Ensuring appropriate testing requires making a series of decisions—e.g., assigning features to distinct Continuous Integration (CI) configurations or determining which test specifications to automate. Such decisions are generally made manually and require in-depth domain knowledge. This study introduces, implements, and evaluates ITMOS (Intelligent Test Management Optimization System), an intelligent test management system designed to optimize decision-making during the software testing process. ITMOS efficiently processes new requirements expressed in natural language, assigning each requirement to an appropriate CI configuration based on predefined quality criteria. Additionally, ITMOS can suggest a set of test specifications for test automation. The feasibility and potential applicability of the proposed solution were empirically evaluated in an industrial telecommunications project at Ericsson. In this context, ITMOS achieved accurate results for decision-making tasks, exceeding the requirements set by domain experts.
Testing and debugging software to fix bugs is considered one of the most important stages of the software life cycle. Many studies have investigated ways to predict bugs in software artifacts using machine learning techniques. For reliable prediction, it is important to consider the explainability of such models. In this paper, we show how feature transformation can significantly improve prediction accuracy and provide insight into the inner workings of bug prediction models. We propose a new approach for bug prediction that first extracts the features, then finds a weighted transformation of these features using a genetic algorithm that best separates bugs from non-bugs when plotted in a low-dimensional space, and finally trains predictive models on the transformed dataset. In our experiments with the proposed feature transformation, traditional machine learning and deep learning classifiers achieved average improvements of 4.25% and 9.6%, respectively, in recall for bug classification across eight software systems, compared to models built on the original data. We also examined the generalizability of our approach to multiclass classification tasks such as commit classification in software systems and found modest improvements in F1-scores (up to 3%) for traditional machine learning models and up to 4% for deep learning models.
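The general idea of a GA-evolved weighted feature transformation can be sketched as follows. This is a toy illustration, not the authors' method: the separation-ratio fitness function, the GA operators (top-half selection, uniform crossover, Gaussian mutation), and all names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(w, X, y):
    """Class separation after per-feature weighting: distance between
    the two class centroids divided by the mean within-class spread."""
    Xw = X * w
    c0, c1 = Xw[y == 0].mean(axis=0), Xw[y == 1].mean(axis=0)
    between = np.linalg.norm(c0 - c1)
    within = (np.linalg.norm(Xw[y == 0] - c0, axis=1).mean()
              + np.linalg.norm(Xw[y == 1] - c1, axis=1).mean())
    return between / (within + 1e-9)

def evolve_weights(X, y, pop=30, gens=40, mut=0.1):
    """Tiny genetic algorithm over non-negative per-feature weights."""
    n = X.shape[1]
    P = rng.uniform(0, 1, size=(pop, n))
    for _ in range(gens):
        scores = np.array([fitness(w, X, y) for w in P])
        parents = P[np.argsort(scores)[::-1][: pop // 2]]  # keep top half
        idx = rng.integers(0, len(parents), size=(pop - len(parents), 2))
        mask = rng.random((pop - len(parents), n)) < 0.5   # uniform crossover
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        children += rng.normal(0, mut, children.shape)     # Gaussian mutation
        P = np.vstack([parents, np.clip(children, 0, None)])
    scores = np.array([fitness(w, X, y) for w in P])
    return P[scores.argmax()]

# Synthetic data: feature 0 separates the classes, feature 1 is noise.
X = np.vstack([rng.normal([0.0, 0.0], 1.0, (50, 2)),
               rng.normal([5.0, 0.0], 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
best = evolve_weights(X, y)
```

On this data, the evolved weights should down-weight the noise feature relative to the informative one, improving the separation score over uniform weighting.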
Several studies have investigated the attributes of great software practitioners. However, such an investigation is still missing in Requirements Engineering (RE). The current knowledge on attributes of great software practitioners might not translate easily to the context of RE, because its activities are usually less technical and more human-centered than other software engineering activities.
This work aims to investigate the attributes of great requirements engineers, the relationships among them, and the strategies that can be employed to develop these attributes. We followed a method composed of a survey of 18 practitioners and follow-up interviews with 11 of them.
Investigative ability when talking to stakeholders, judiciousness, and understanding of the business are the most commonly mentioned of the 22 attributes identified, which were grouped into four categories. We also found 38 strategies to improve RE skills, such as training, talking to all stakeholders, and acquiring domain knowledge.
The attributes, their categories, and their relationships are organized into a map. The relations between attributes and strategies are represented in a Sankey diagram. Software practitioners can use our findings to improve their understanding of the role and responsibilities of requirements engineers.