High-Level Design Stories in Architecture-Centric Agile Development
Pub Date: 2019-03-01 | DOI: 10.1109/ICSA-C.2019.00032
J. A. D. Pace, A. Bianchi
Capturing and communicating the architecture decisions of a project is central to architecture knowledge management, so that those decisions can deliver value to the system stakeholders and also support the system implementation. In agile development contexts, there is often a balancing act between documenting design decisions in detail and keeping the documentation effort at a level tolerable for the project. To this end, we present the notion of High-Level Design stories (HLDs): small, modular artifacts that record the main design decisions and their context, but also include information about architecture assumptions, quality-attribute analysis, and pending issues for the system. HLDs are intended to be created and refined during the different phases of an architecture-centric development process, and to assist in the validation of the decisions (and pending issues) in that process. In this way, a global (yet detailed) architecture design can be obtained by combining the HLDs. In this work, we discuss the pros and cons of using HLDs for design decisions based on experiences from an industrial software project.
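The abstract names the ingredients of an HLD story (decision, context, assumptions, quality-attribute analysis, pending issues) without showing a template. Purely as a reading aid, the following Python sketch models those ingredients as a record; all field names and example values are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HighLevelDesignStory:
    """Illustrative record of the kinds of information an HLD story captures.

    Field names and the example values below are invented for illustration,
    not taken from the paper's template.
    """
    decision: str                    # the main design decision being recorded
    context: str                     # the situation or driver that motivated it
    assumptions: List[str] = field(default_factory=list)     # architecture assumptions
    quality_attribute_analysis: str = ""                      # impact on e.g. performance, security
    pending_issues: List[str] = field(default_factory=list)   # open points to validate later

hld = HighLevelDesignStory(
    decision="Introduce a message broker between ingestion and analytics services",
    context="Ingestion bursts exceed the sustained processing rate of analytics",
    assumptions=["Analytics results may lag ingestion by up to 5 seconds"],
    quality_attribute_analysis="Improves availability under load; adds operational complexity",
    pending_issues=["Broker sizing under peak load not yet validated"],
)
```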
{"title":"High-Level Design Stories in Architecture-Centric Agile Development","authors":"J. A. D. Pace, A. Bianchi","doi":"10.1109/ICSA-C.2019.00032","DOIUrl":"https://doi.org/10.1109/ICSA-C.2019.00032","url":null,"abstract":"Capturing and communicating the architecture decisions of a project is very important in architecture knowledge management, so that those decisions can deliver value to the system stakeholders and also support the system implementation. In agile development contexts, there is often a balancing act between documenting the design decisions in detail and keeping the documentation efforts to a level tolerable for the project. To this end, we present the notion of High-level Design stories (or HLDs), as small, modular artifacts that record the main design decisions and their context, but also include information about architecture assumptions, quality-attribute analysis, and pending issues for the system. HLDs are intended to be created and refined during the different phases of an architecture-centric development process, and assist in the validation of the decisions (and pending issues) in that process. This way, a global (although detailed) architecture design can be obtained from the combination of the HLDs. In this work, we discuss the pros and cons of using HLDs for design decisions based on experiences from an industrial software project.","PeriodicalId":239999,"journal":{"name":"2019 IEEE International Conference on Software Architecture Companion (ICSA-C)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121043495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Microservices in Industry: Insights into Technologies, Characteristics, and Software Quality
Pub Date: 2019-03-01 | DOI: 10.1109/ICSA-C.2019.00041
J. Bogner, J. Fritzsch, S. Wagner, A. Zimmermann
Microservices are a topic driven mainly by practitioners, and academia is only starting to investigate them. Hence, there is no clear picture of the usage of Microservices in practice. In this paper, we contribute a qualitative study with insights into industry adoption and implementation of Microservices. In contrast to existing quantitative studies, we conducted interviews to gain a more in-depth understanding of the current state of practice. During 17 interviews with software professionals from 10 companies, we analyzed 14 service-based systems. The interviews focused on applied technologies, Microservices characteristics, and the perceived influence on software quality. We found that companies generally rely on well-established technologies for service implementation, communication, and deployment. Most systems, however, did not exhibit the high degree of technological diversity commonly expected with Microservices. Decentralization and product character differed for systems built for external customers. Applied DevOps practices and automation were still at a mediocre level, and only very few companies strictly followed the "you build it, you run it" principle. The impact of Microservices on software quality was mainly rated as positive. While maintainability received the most positive mentions, some major issues were associated with security. We present a description of each case and summarize the most important findings across companies of different domains and sizes. Researchers may build on our findings and take them into account when designing industry-focused methods.
{"title":"Microservices in Industry: Insights into Technologies, Characteristics, and Software Quality","authors":"J. Bogner, J. Fritzsch, S. Wagner, A. Zimmermann","doi":"10.1109/ICSA-C.2019.00041","DOIUrl":"https://doi.org/10.1109/ICSA-C.2019.00041","url":null,"abstract":"Microservices are a topic driven mainly by practitioners and academia is only starting to investigate them. Hence, there is no clear picture of the usage of Microservices in practice. In this paper, we contribute a qualitative study with insights into industry adoption and implementation of Microservices. Contrary to existing quantitative studies, we conducted interviews to gain a more in-depth understanding of the current state of practice. During 17 interviews with software professionals from 10 companies, we analyzed 14 service-based systems. The interviews focused on applied technologies, Microservices characteristics, and the perceived influence on software quality. We found that companies generally rely on well-established technologies for service implementation, communication, and deployment. Most systems, however, did not exhibit a high degree of technological diversity as commonly expected with Microservices. Decentralization and product character were different for systems built for external customers. Applied DevOps practices and automation were still on a mediocre level and only very few companies strictly followed the you build it, you run it principle. The impact of Microservices on software quality was mainly rated as positive. While maintainability received the most positive mentions, some major issues were associated with security. We present a description of each case and summarize the most important findings of companies across different domains and sizes. Researchers may build upon our findings and take them into account when designing industry-focused methods.","PeriodicalId":239999,"journal":{"name":"2019 IEEE International Conference on Software Architecture Companion (ICSA-C)","volume":"330 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121252029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Continuous Performance Testing in Virtual Time
Pub Date: 2019-03-01 | DOI: 10.1109/ICSA-C.2019.00027
Robert Chatley, T. Field, David Wei
We introduce the notion of performance unit testing, which allows developers to explore performance characteristics and detect potential performance problems continuously throughout the development of a software system. Our ideas are embodied in PerfMock, which extends a well-established object mocking framework so that each mock object can be configured with a performance model for predicting the time taken to process each message it receives. PerfMock executes tests in virtual time. This allows performance to be evaluated much more quickly than running a full system performance test, making it possible to test performance continuously, as part of a unit test suite. We demonstrate the core features of PerfMock and show how it can be used to support a process of iterative refinement, whereby models are improved as more becomes known about the actual performance of the objects being mocked, e.g. by building models from production data. We show that even very simple performance models used early in the development process can provide useful information for estimating both absolute execution times and the effects of changes in functionality and/or design. The iterative approach we support has the pleasing property that as the system evolves, more decisions are made and more data is collected, meaning that we can refine our models and that predicted and actual performance gradually converge.
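PerfMock itself extends a Java object mocking framework; its actual API is not reproduced here. The following Python sketch only illustrates the underlying idea of a performance unit test: each stubbed collaborator carries a simple cost model, calls advance a virtual clock instead of consuming wall-clock time, and the assertion is placed on the accumulated virtual time. All class names, methods, and timing values are hypothetical.

```python
import unittest

class VirtualClock:
    """Simulated time: it advances only when a stubbed collaborator 'consumes' time."""
    def __init__(self):
        self.now = 0.0

    def advance(self, seconds):
        self.now += seconds

class TimedStub:
    """A stand-in collaborator whose calls advance virtual time per a cost model."""
    def __init__(self, clock, cost_model):
        self.clock = clock
        self.cost_model = cost_model  # method name -> assumed duration in seconds

    def call(self, method):
        self.clock.advance(self.cost_model[method])

class CheckoutLatencyTest(unittest.TestCase):
    def test_checkout_meets_latency_budget(self):
        clock = VirtualClock()
        # Hypothetical cost models; in an iterative workflow these numbers would be
        # refined over time, e.g. from production measurements.
        database = TimedStub(clock, {"load_cart": 0.005, "store_order": 0.010})
        payments = TimedStub(clock, {"charge_card": 0.120})

        # The system under test would normally drive these collaborators; the calls
        # are inlined here to keep the sketch self-contained.
        database.call("load_cart")
        payments.call("charge_card")
        database.call("store_order")

        # The assertion is about virtual time, so the test itself runs almost instantly.
        self.assertLess(clock.now, 0.200)

if __name__ == "__main__":
    unittest.main()
```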
{"title":"Continuous Performance Testing in Virtual Time","authors":"Robert Chatley, T. Field, David Wei","doi":"10.1109/ICSA-C.2019.00027","DOIUrl":"https://doi.org/10.1109/ICSA-C.2019.00027","url":null,"abstract":"We introduce the notion of performance unit testing which allows developers to explore performance characteristics and detect potential performance problems continuously throughout the development of a software system. Our ideas are embodied in PerfMock, which extends a well-established object mocking framework so that each mock object can be configured with a performance model for predicting the time taken to process each message it receives. PerfMock executes tests in virtual time. This allows performance to be evaluated much more quickly than running a full system performance test, making it possible to test performance continuously, as part of a unit test suite. We demonstrate the core features of PerfMock and show how it can be used to support a process of iterative refinement, whereby models can be improved when more about the actual performance of the objects being mocked becomes known, e.g. by building models from production data. We show that even very simple performance models used early on in the development process can provide useful information for estimating both absolute execution times and the effects of changes in functionality and/or design. The iterative approach we support has the pleasing property that as the system evolves, more decisions are made and more data is collected meaning that we can refine our models, and predicted and actual performance gradually converge.","PeriodicalId":239999,"journal":{"name":"2019 IEEE International Conference on Software Architecture Companion (ICSA-C)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127269332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TCP-Inspired Congestion Avoidance for Cloud-IoT Applications
Pub Date: 2019-03-01 | DOI: 10.1109/ICSA-C.2019.00009
Manuel Gotin, Felix Lösch, Ralf H. Reussner
Cloud-IoT applications consist of thousands of smart devices sending sensor data to processing cloud applications. If the processing rate of the cloud application is limited, it may be unable to cope with an increasing number of connected devices. If such a situation is not addressed, the cloud application is overloaded with messages, resulting in high processing delays or loss of data. For this reason, we propose a TCP-inspired congestion avoidance mechanism that reconfigures the send rate of devices at runtime, aiming for a low processing delay and a high throughput. We show that it is able to avoid congestion by adapting the send rate of the devices to a fair share of the processing rate of the cloud application.
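The abstract does not publish the controller itself; the sketch below shows the general AIMD (additive-increase, multiplicative-decrease) scheme that "TCP-inspired" typically refers to, applied to a device's send rate. All parameter values and the delay sequence are placeholders, not values from the paper.

```python
def adapt_send_rate(current_rate, observed_delay, delay_target=1.0,
                    additive_step=1.0, decrease_factor=0.5, min_rate=0.1):
    """One control step of an AIMD send-rate controller (messages per second).

    observed_delay is the processing delay currently reported by the cloud side.
    Parameter values are illustrative only.
    """
    if observed_delay > delay_target:
        # Congestion signal: back off multiplicatively, as TCP does on loss.
        return max(min_rate, current_rate * decrease_factor)
    # No congestion: probe for spare capacity with a small additive increase.
    return current_rate + additive_step

# Example: two devices starting from very different rates drift towards similar
# (fair) shares because they back off and probe in the same way.
rate_a, rate_b = 50.0, 5.0
for delay in [2.0, 0.5, 0.4, 2.0, 0.5, 0.6, 2.0, 0.5]:
    rate_a = adapt_send_rate(rate_a, delay)
    rate_b = adapt_send_rate(rate_b, delay)
print(round(rate_a, 1), round(rate_b, 1))  # rates move closer together over time
```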
{"title":"TCP-Inspired Congestion Avoidance for Cloud-IoT Applications","authors":"Manuel Gotin, Felix Lösch, Ralf H. Reussner","doi":"10.1109/ICSA-C.2019.00009","DOIUrl":"https://doi.org/10.1109/ICSA-C.2019.00009","url":null,"abstract":"Cloud-loT Applications consist of thousands of smart devices sending sensor data to processing cloud applications. If the processing rate of the cloud application is limited it may be unable to cope with an increasing number of connected devices. If such a situation is not addressed, the cloud application is overloaded with messages, resulting in a high processing delay or loss of data. For this reason we propose a TCP-inspired congestion avoidance which reconfigures the send rate of devices at runtime aiming for a low processing delay and a high throughput. We show, that it is able to avoid congestions by adapting the send rate of devices to a fair share of the processing rate of the cloud application.","PeriodicalId":239999,"journal":{"name":"2019 IEEE International Conference on Software Architecture Companion (ICSA-C)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131685550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extracting Quality Attributes from User Stories for Early Architecture Decision Making
Pub Date: 2019-03-01 | DOI: 10.1109/ICSA-C.2019.00031
Fabian Gilson, M. Galster, François Georis
Software quality attributes (e.g., security, performance) influence software architecture design decisions, e.g., when choosing technologies, patterns, or tactics. As software developers move from big upfront design to evolutionary or emergent design, the architecture of a system evolves as more functionality is added. In agile software development, functional user requirements are often expressed as user stories, and quality attributes may only be referenced implicitly in them. To support a more systematic analysis of, and reasoning about, quality attributes in agile development projects, this paper explores how to automatically identify quality attributes in user stories. This could help to better understand relevant quality attributes (and potential architectural key drivers) before analysing product backlogs and domains in detail, and it provides the “bigger picture” of potential architectural drivers for early architecture decision making. The goal of this paper is to present our vision and preliminary work towards understanding whether user stories include information about quality attributes at all, and if so, how such information can be identified in an automated manner.
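To make the idea of spotting implicit quality attributes in user stories concrete, here is a minimal Python sketch based on simple keyword matching. The lexicon and the example story are invented, and the paper's actual identification technique may well be more sophisticated (e.g. trained classifiers rather than keyword cues).

```python
# Illustrative keyword lexicon; not taken from the paper.
QA_KEYWORDS = {
    "performance": ["fast", "quickly", "response time", "latency", "load time"],
    "security": ["secure", "password", "encrypt", "authoriz", "authentic"],
    "usability": ["easy to use", "intuitive", "accessible"],
}

def detect_quality_attributes(user_story):
    """Return the set of quality attributes hinted at in a single user story."""
    text = user_story.lower()
    return {qa for qa, cues in QA_KEYWORDS.items()
            if any(cue in text for cue in cues)}

story = ("As a customer, I want search results to load quickly "
         "so that I can find products without waiting.")
print(detect_quality_attributes(story))  # expected: {'performance'}
```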
{"title":"Extracting Quality Attributes from User Stories for Early Architecture Decision Making","authors":"Fabian Gilson, M. Galster, François Georis","doi":"10.1109/ICSA-C.2019.00031","DOIUrl":"https://doi.org/10.1109/ICSA-C.2019.00031","url":null,"abstract":"Software quality attributes (e.g., security, performance) influence software architecture design decisions, e.g., when choosing technologies, patterns or tactics. As software developers are moving from big upfront design to an evolutionary or emerging design, the architecture of a system evolves as more functionality is added. In agile software development, functional user requirements are often expressed as user stories. Quality attributes might be implicitly referenced in user stories. To support a more systematic analysis and reasoning about quality attributes in agile development projects, this paper explores how to automatically identify quality attributes from user stories. This could help better understand relevant quality attributes (and potential architectural key drivers) before analysing product backlogs and domains in detail and provides the “bigger picture” of potential architectural drivers for early architecture decision making. The goal of this paper is to present our vision and preliminary work towards understanding whether user stories do include information about quality attributes at all, and if so, how we can identify such information in an automated manner.","PeriodicalId":239999,"journal":{"name":"2019 IEEE International Conference on Software Architecture Companion (ICSA-C)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131244291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Architectural Runtime Verification
Pub Date: 2019-03-01 | DOI: 10.1109/ICSA-C.2019.00021
Lars Stockmann, S. Laux, E. Bodden
Analyzing runtime behavior is an important part of developing and verifying software systems. This is especially true for complex component-based systems used in the vehicle industry. Here, locating the actual cause of (mis-)behavior can be time-consuming, because the analysis is usually not performed at the architecture level, where the system was initially designed. Instead, it often relies on source-code debugging or on visualizing signals and events, and the results must then be correlated with what is expected of the architecture. With the ever-growing complexity of such systems, the advent of model-based development and code generators, and the distributed nature of the development process, this becomes increasingly difficult. This paper therefore presents Architectural Runtime Verification (ARV), a generic approach to analyzing system behavior at the architecture level using the principles of Runtime Verification. It relies on the architecture description and on runtime information collected in simulation-based tests. This allows an analyst to easily verify or refute hypotheses about system behavior regarding the interaction of components, without the need to inspect the source code. We have instantiated ARV as a framework that allows a client to pose queries about architectural elements using a timed LTL-based constraint language. From these queries, ARV generates a Runtime Verification monitor and applies it to runtime data stored in a database. We demonstrate the applicability of the approach with a running example from the automotive industry.
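ARV generates monitors from a timed LTL-based constraint language over the architecture description; that machinery is not reproduced here. As a rough intuition for what such a generated monitor checks, the following Python sketch evaluates a single hypothetical bounded-response property over a recorded trace of events on architectural ports. The event names, timestamps, and deadline are invented.

```python
def check_bounded_response(trace, trigger, response, max_latency_ms):
    """Toy monitor for a timed property over a recorded trace:
    every `trigger` event must be followed by a `response` event within
    `max_latency_ms`. Returns the timestamps of violating triggers.
    """
    violations = []
    pending = []  # timestamps of triggers still awaiting a response
    for timestamp, event in trace:
        # Expire pending triggers whose deadline has already passed.
        violations.extend(t for t in pending if timestamp - t > max_latency_ms)
        pending = [t for t in pending if timestamp - t <= max_latency_ms]
        if event == trigger:
            pending.append(timestamp)
        elif event == response and pending:
            pending.pop(0)  # earliest open trigger is considered answered
    violations.extend(pending)  # triggers that were never answered at all
    return violations

# Recorded interaction on two architectural ports: (timestamp in ms, event).
trace = [(0, "BrakeRequest"), (30, "BrakeAck"), (100, "BrakeRequest"), (400, "BrakeAck")]
print(check_bounded_response(trace, "BrakeRequest", "BrakeAck", max_latency_ms=50))  # [100]
```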
{"title":"Architectural Runtime Verification","authors":"Lars Stockmann, S. Laux, E. Bodden","doi":"10.1109/ICSA-C.2019.00021","DOIUrl":"https://doi.org/10.1109/ICSA-C.2019.00021","url":null,"abstract":"Analyzing runtime behavior is an important part of developing and verifying software systems. This is especially true for complex component-based systems used in the vehicle industry. Here, locating the actual cause of (mis-)behavior can be time-consuming, because the analysis is usually not performed on the architecture level, where the system has initially been designed. Instead, it often relies on source code debugging or visualizing signals and events. The results must then be correlated to what is expected regarding the architecture. With an ever-growing complexity of the systems, the advent of model-based development, code generators and the distributed nature of the development process, this becomes increasingly difficult. This paper therefore presents Architectural Runtime Verification (ARV), a generic approach to analyze system behavior on architecture level using the principles of Runtime Verification. It relies on the architecture description and on the runtime information that is collected in simulation-based tests. This allows an analyst to easily verify or refute hypotheses about system behavior regarding the interaction of components, without the need to inspect the source code. We have instantiated ARV as a framework that allows a client to make queries about architectural elements using a timed LTL-based constraint language. From this, ARV generates a Runtime Verification monitor and applies it to runtime data stored in a database. We demonstrate the applicability of this approach with a running example from the automotive industry.","PeriodicalId":239999,"journal":{"name":"2019 IEEE International Conference on Software Architecture Companion (ICSA-C)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127052516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using View-Based Architecture Descriptions to Aid in Automated Runtime Planning for a Smart Factory
Pub Date: 2019-03-01 | DOI: 10.1109/ICSA-C.2019.00043
Marian Daun, Jennifer Brings, Patricia Aluko Obe, Stefanie Weiß, B. Böhm, S. Unverdorben
Smart factories are highly flexible production sites that can adapt to fulfill a variety of production needs. In particular, this allows them to fulfill production orders that were unknown during the design phase of the factory. To this end, smart factories reconfigure themselves at runtime to execute new and previously unknown production orders while aiming for an optimal use of resources. In this adaptation, a variety of factors must be taken into account: not only can the supply chain between different machines be changed to perform production steps in a new sequence (as required by the production order), but the individual machines can also adapt themselves to exhibit different capabilities. A challenging task for such smart factories is determining whether a certain production order is producible at all and which configuration is optimal for fulfilling it. To determine the optimal configuration among a huge variety of reconfiguration possibilities, many pieces of information must be taken into account, for instance the capabilities of the machines, their workload, the possible sequences of production steps, and constraints such as time and cost. To cope with the complexity of smart factory production planning, we employ a view-based engineering approach for the development of embedded systems. This paper contributes a report on the application of view-based architecture descriptions to the engineering of a smart factory and illustrates that this approach also has considerable benefits for production planning within a smart factory.
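The paper reports on view-based architecture descriptions rather than on a specific planning algorithm. Purely to make the producibility question concrete, here is a minimal Python sketch that checks whether every step of a production order is covered by some machine capability; machine names, capabilities, and the example order are invented, and a real planner would additionally weigh workload, sequencing, time, and cost.

```python
# Hypothetical capability model: which production steps each machine can perform.
MACHINE_CAPABILITIES = {
    "mill_1":  {"milling", "drilling"},
    "lathe_1": {"turning"},
    "robot_1": {"assembly", "inspection"},
}

def find_configuration(production_order, capabilities):
    """Return one machine assignment per step of the order, or None if some step
    cannot be covered by any machine. This only checks producibility, nothing more.
    """
    assignment = []
    for step in production_order:
        candidates = [m for m, caps in capabilities.items() if step in caps]
        if not candidates:
            return None  # order is not producible with the current machine park
        assignment.append((step, candidates[0]))
    return assignment

order = ["milling", "turning", "assembly"]
print(find_configuration(order, MACHINE_CAPABILITIES))
# [('milling', 'mill_1'), ('turning', 'lathe_1'), ('assembly', 'robot_1')]
```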
{"title":"Using View-Based Architecture Descriptions to Aid in Automated Runtime Planning for a Smart Factory","authors":"Marian Daun, Jennifer Brings, Patricia Aluko Obe, Stefanie Weiß, B. Böhm, S. Unverdorben","doi":"10.1109/ICSA-C.2019.00043","DOIUrl":"https://doi.org/10.1109/ICSA-C.2019.00043","url":null,"abstract":"Smart factories are highly flexible production sites, that can adapt to fulfill a variety of production needs. Particularly, this shall allow for fulfilling production orders that are unknown during the design phase of the factory. To this end, smart factories reconfigure themselves during runtime to allow the execution of new and previously unknown production orders while aiming for an optimal use of resources. In the adaption a variety of factors must be taken into account, not only can the supply chain between different machines be changed to allow performing production steps in a new sequence (as needed by the production order), but the individual machines also can adapt themselves to exhibit different capabilities. A challenging task for such smart factories is the examination whether a certain production order is producible and which configuration is optimal for fulfilling this production order. To determine the optimal configuration among a huge variety of reconfiguration possibilities, many pieces of information must be taken into account. For instance, the capabilities of the machines, their workload, the possible sequences of production steps or constraints such as time and costs. To cope with the complexity of smart factory production planning, we employ a view-based engineering approach for the development of embedded systems. This paper contributes a report of the application of the view-based architecture descriptions to the engineering of a smart factory and illustrates that this approach also has considerable benefits for production planning within a smart factory.","PeriodicalId":239999,"journal":{"name":"2019 IEEE International Conference on Software Architecture Companion (ICSA-C)","volume":"15 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120813819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards a Virtual Continuous Integration Platform for Advanced Driving Assistance Systems
Pub Date: 2019-03-01 | DOI: 10.1109/ICSA-C.2019.00018
A. Bachorek, Felix Schulte-Langforth, Alexander Witton, T. Kuhn, P. Antonino
Recent technological progress in computational engineering and systems design brings the vision of autonomous driving within reach. Functional, and particularly also qualitative, aspects of automotive functions are therefore gaining in importance more than ever before. This is due to the growing complexity of modern vehicles, which gradually evolve into cyber-physical systems, giving rise to the increasingly ambitious challenge of reliably validating the functional and non-functional integration of all their subsystems. Whereas traditional approaches to component and system testing are becoming more and more inappropriate for reasons of cost and general viability, simulation-based methodologies offer an adequate solution due to their commonly scalable and generic nature. However, this only holds given a sufficiently high fidelity of the applied simulation models and an evaluation platform that is straightforward to use yet powerful in service, with flexible support for nesting execution semantics, coupling co-simulators, and interfacing downstream tools with monitoring and visualization capabilities. In this regard, we introduce our concept of a continuous integration platform for virtually prototyping technical systems of any kind, applicable at any stage of the development process thanks to arbitrary levels of abstraction and wide-ranging tooling compatibility. At its core, this platform is based on the established FERAL simulation framework, combined with versatile architectural components that are adaptable for domain-specific and cross-domain use cases. We focus this work on Advanced Driving Assistance Systems (ADAS) functions and showcase the end-user operation of the instantiated platform, from the configuration of traffic scenarios, through adjusting the functional logic and parameter values, to the visual validation of simulation results.
{"title":"Towards a Virtual Continuous Integration Platform for Advanced Driving Assistance Systems","authors":"A. Bachorek, Felix Schulte-Langforth, Alexander Witton, T. Kuhn, P. Antonino","doi":"10.1109/ICSA-C.2019.00018","DOIUrl":"https://doi.org/10.1109/ICSA-C.2019.00018","url":null,"abstract":"Recent technological progress in computational engineering and systems design will enable the vision of autonomous driving coming true anytime soon. Functional but particularly also qualitative aspects of automotive functions are therefore gaining in importance more than ever before. This is due to the growing complexity of modern vehicles that gradually evolve into cyber-physical systems giving rise to the increasingly ambitious challenge of reliably validating the non-/functional integration of all their inherent subsystems. Thus, whereas traditional approaches to component and system testing are becoming more and more inappropriate for costs and general viability reasons, simulation-based methodologies offer an adequate solution due to their commonly scalable and generic nature. However, this is only true given a sufficiently high fidelity of the applied simulation models and a straightforward-in-use yet powerful-in-service evaluation platform with flexible support for execution semantics nesting, co-simulator coupling, and interfacing downstream tools with monitoring and visualization capabilities. In this regard, we introduce our concept of a continuous integration platform allowing for virtually prototyping technical systems of any kind that is applicable at any stage of the development process thanks to arbitrary levels of abstraction and wide-range tooling compatibility. This platform is based on the approved FERAL simulation framework at its core combined with versatile architectural components that are adaptable for domain-specific and cross-domain use cases. We focus this work on Advanced Driving Assistance Systems (ADAS) functions and showcase the end-user operation of the instantiated platform from the configuration of traffic scenarios over adjusting the functional logic and parameter values up to the visual validation of simulation results.","PeriodicalId":239999,"journal":{"name":"2019 IEEE International Conference on Software Architecture Companion (ICSA-C)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115798318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating Design Rationale in Architecture
Pub Date: 2019-03-01 | DOI: 10.1109/ICSA-C.2019.00033
P. D. Jong, J. V. D. Werf, Marlies van Steenbergen, Floris Bex, Matthieu J. S. Brinkhuis
Although architecture is often seen as the culmination of design decisions, design rationale is a suppositious child in architecture documentation. Many architecture frameworks and standards, such as TOGAF and ISO/IEC 42010, recognize its importance but do not offer any support for the rationale process. Recent initiatives have shown that simple means help in providing more rationale. However, very few studies provide evidence on whether more rationale indeed leads to better quality. In this paper, we propose a non-invasive method, the Rationale Capture Cycle, that supports architects in capturing rationale during the design process. Through a controlled experiment with 10 experienced architects, we assess the effectiveness of the method in terms of design quality through different measures. The results of our experiment show that (1) better rationale is strongly correlated with higher quality, and (2) the test group using our proposed method outperforms the control group.
{"title":"Evaluating Design Rationale in Architecture","authors":"P. D. Jong, J. V. D. Werf, Marlies van Steenbergen, Floris Bex, Matthieu J. S. Brinkhuis","doi":"10.1109/ICSA-C.2019.00033","DOIUrl":"https://doi.org/10.1109/ICSA-C.2019.00033","url":null,"abstract":"Although architecture is often seen as the culmination of design decisions, design rationale is a suppositious child in architecture documentation. Many architecture frameworks and standards, like TOGAF and ISO/IEC 42010, recognize the importance, but do not offer any support in the rationale process. Recent initiatives have shown that simple means help in providing more rationale. However, there are very few studies that give evidence whether more rationale indeed leads to better quality. In this paper, we propose a non-invasive method, the Rationale Capture Cycle, that supports architects in capturing rationale during the design process. Through a controlled experiment with 10 experienced architects, we observe the effectiveness of the method in terms of design quality through different measures. The results of our experiments show that: (1) better rationale is strongly correlated with high quality, and (2) the test group with our proposed method outperforms the control group.","PeriodicalId":239999,"journal":{"name":"2019 IEEE International Conference on Software Architecture Companion (ICSA-C)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127126091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing Safety Critical Software Systems to Manage Inherent Uncertainty
Pub Date: 2019-03-01 | DOI: 10.1109/ICSA-C.2019.00051
A. Serban
Deploying machine learning algorithms in safety-critical systems raises new challenges for system designers. The opaque nature of some algorithms, together with the potentially large input space, makes reasoning about or formally proving safety difficult. In this paper, we argue that the inherent uncertainty that comes from using certain classes of machine learning algorithms can be mitigated through the development of software architecture design patterns. New or adapted patterns will allow shorter roll-out times for new technologies and decrease the negative impact machine learning components can have on safety-critical systems. We outline the important safety challenges that machine learning algorithms raise and define three important directions for the development of new architectural patterns.
{"title":"Designing Safety Critical Software Systems to Manage Inherent Uncertainty","authors":"A. Serban","doi":"10.1109/ICSA-C.2019.00051","DOIUrl":"https://doi.org/10.1109/ICSA-C.2019.00051","url":null,"abstract":"Deploying machine learning algorithms in safety critical systems raises new challenges for system designers. The opaque nature of some algorithms together with the potentially large input space makes reasoning or formally proving safety difficult. In this paper, we argue that the inherent uncertainty that comes from using certain classes of machine learning algorithms can be mitigated through the development of software architecture design patterns. New or adapted patterns will allow faster roll out time for new technologies and decrease the negative impact machine learning components can have on safety critical systems. We outline the important safety challenges that machine learning algorithms raise and define three important directions for the development of new architectural patterns.","PeriodicalId":239999,"journal":{"name":"2019 IEEE International Conference on Software Architecture Companion (ICSA-C)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123167594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}