Pub Date: 2013-06-19 | DOI: 10.1109/ISORC.2013.6913215
D. Müller, Matthias Werner
Contrary to the optimal scheduling algorithm Earliest Deadline First (EDF), Rate-Monotonic Scheduling (RMS) can lead to non-schedulable task sets at total utilizations below 1 on a uniprocessor. Quantifying this deficiency has long been a topic in real-time research. We show weaknesses of three scheduling-algorithm metrics: breakdown utilization, the utilization upper bound, and the numerical optimality degree. Finally, we suggest a new measure of schedulability, called Efficiency, and calculate its bounds. It turns out that the numerical optimality degree might be too optimistic, depending on the assumed total utilization distribution. The main results are the application of a power-law total utilization distribution to quantify the RMS-to-EDF Efficiency and a step-by-step derivation of a lower bound on this Efficiency. We apply a differential analysis of schedulability.
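The deficiency being quantified can be made concrete with the two textbook schedulability tests: under EDF, an implicit-deadline task set is schedulable exactly when its total utilization U is at most 1, whereas for RMS the Liu-and-Layland bound n(2^(1/n) - 1) is only a sufficient condition. The sketch below uses an invented two-task example (not from the paper) that EDF accepts but RMS provably misses:

```python
import math

def liu_layland_bound(n):
    """Sufficient RMS utilization bound for n implicit-deadline tasks."""
    return n * (2 ** (1 / n) - 1)

def utilization(tasks):
    """Total utilization of (wcet, period) pairs."""
    return sum(c / t for c, t in tasks)

def rms_response_times(tasks):
    """Exact worst-case response times under RMS (shorter period = higher
    priority), computed by the standard fixed-point iteration."""
    tasks = sorted(tasks, key=lambda ct: ct[1])
    resp = []
    for i, (c_i, t_i) in enumerate(tasks):
        r = c_i
        while True:
            nxt = c_i + sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
            if nxt == r or nxt > t_i:   # fixed point reached or deadline missed
                r = nxt
                break
            r = nxt
        resp.append(r)
    return resp

tasks = [(6, 10), (5, 14)]        # (wcet, period), invented example
print(utilization(tasks))         # ~0.957 <= 1: EDF-schedulable
print(liu_layland_bound(2))       # ~0.828: the sufficient RMS test is inconclusive
print(rms_response_times(tasks))  # [6, 17]: second task misses its deadline of 14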
{"title":"Quantifying the advantage of EDF vs. RMS schedulability on a uniprocessor using a differential analysis and a power-law total utilization distribution","authors":"D. Müller, Matthias Werner","doi":"10.1109/ISORC.2013.6913215","DOIUrl":"https://doi.org/10.1109/ISORC.2013.6913215","url":null,"abstract":"Contrary to the optimal scheduling algorithm Earliest Deadline First (EDF), Rate-Monotonic Scheduling (RMS) can lead to non-schedulable task sets for total utilizations below 1 on a uniprocessor. The quantification of this deficiency has been a topic in real-time science for a long time. We show weaknesses of the scheduling algorithm metrics breakdown utilization, utilization upper bound, and numerical optimality degree. Finally, we suggest a new measure of schedulability called Efficiency and calculate its bounds. It turns out that numerical optimality degree might be too optimistic depending on the assumed total utilization distribution. The main results are the application of a power-law total utilization distribution to quantify the RMS-to-EDF Efficiency and a step-by-step derived lower bound of this Efficiency. We apply a differential analysis of schedulability.","PeriodicalId":330873,"journal":{"name":"16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013)","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121292153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2013-06-19DOI: 10.1109/ISORC.2013.6913233
M. A. Wehrmeister, G. Berkenbrock
To cope with the increasing design complexity of modern embedded real-time systems, Model-Driven Engineering (MDE) techniques are being proposed and applied within the domain of embedded and real-time systems. This paper discusses a design approach that combines MDE and concepts of the Aspect-Oriented Software Development (AOSD) to deal with functional and non-functional requirements in a modularized way using higher abstraction levels. This approach covers activities from requirements engineering to the implementation phases, allowing early verification and simulation of system specifications. The proposed MDE approach is supported by a set of CASE tools. A configurable tool for code generation is capable of creating source code for different target platforms from the models produced in earlier design phases. Besides generating code for functional requirements handling, the tool also weaves aspects' adaptations, which modify the generated code to handle non-functional requirements. Furthermore, a tool to execute automatically a set of test cases is used to simulate and exercise the system behavior already in the specification and modeling phase. This tools allows engineers to verify if the system model is being specified according to the requirements, identifying whether the functional requirements are being fulfilled. The proposed approach has been successfully applied to the development of embedded real-time systems for different real-world applications. Obtained results show an improvement concerning the modularization of system's requirements handling, leading to an increased reuse of previously created artifacts.
{"title":"AMoDE-RT: Advancing Model-Driven Engineering for embedded real-time systems","authors":"M. A. Wehrmeister, G. Berkenbrock","doi":"10.1109/ISORC.2013.6913233","DOIUrl":"https://doi.org/10.1109/ISORC.2013.6913233","url":null,"abstract":"To cope with the increasing design complexity of modern embedded real-time systems, Model-Driven Engineering (MDE) techniques are being proposed and applied within the domain of embedded and real-time systems. This paper discusses a design approach that combines MDE and concepts of the Aspect-Oriented Software Development (AOSD) to deal with functional and non-functional requirements in a modularized way using higher abstraction levels. This approach covers activities from requirements engineering to the implementation phases, allowing early verification and simulation of system specifications. The proposed MDE approach is supported by a set of CASE tools. A configurable tool for code generation is capable of creating source code for different target platforms from the models produced in earlier design phases. Besides generating code for functional requirements handling, the tool also weaves aspects' adaptations, which modify the generated code to handle non-functional requirements. Furthermore, a tool to execute automatically a set of test cases is used to simulate and exercise the system behavior already in the specification and modeling phase. This tools allows engineers to verify if the system model is being specified according to the requirements, identifying whether the functional requirements are being fulfilled. The proposed approach has been successfully applied to the development of embedded real-time systems for different real-world applications. 
Obtained results show an improvement concerning the modularization of system's requirements handling, leading to an increased reuse of previously created artifacts.","PeriodicalId":330873,"journal":{"name":"16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123361389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2013-06-19DOI: 10.1109/ISORC.2013.6913190
D. Doering, C. Pereira, P. Denes, J. Joseph
High-speed scientific cameras have been demanding more from their control systems as the number of pixels, and number of frame increases and therefore the required total bandwidth. One way to cope with this demand is to perform realtime image processing. The challenge on that is the fact that each experiment requires a different processing algorithms and one needs to reconfigure it frequently. An example of this system is the LBNL high-speed cameras based on FPGAs used on X-rays and electron microscopy experiments. These camera systems can benefit from modern design methodologies that explore higher abstraction level modeling, which includes both functional and non-functional requirements specification and that take advantage of techniques such as object-oriented and aspect-oriented methodologies. This paper introduces HIPAO, a Hardware Image Processing system based on model driven engineering and Aspect-Oriented modeling. Some examples are shown for each step of the methodology that goes from requirements modeling to automatic code generation.
{"title":"A model driven engineering approach based on aspects for high speed scientific X-rays cameras","authors":"D. Doering, C. Pereira, P. Denes, J. Joseph","doi":"10.1109/ISORC.2013.6913190","DOIUrl":"https://doi.org/10.1109/ISORC.2013.6913190","url":null,"abstract":"High-speed scientific cameras have been demanding more from their control systems as the number of pixels, and number of frame increases and therefore the required total bandwidth. One way to cope with this demand is to perform realtime image processing. The challenge on that is the fact that each experiment requires a different processing algorithms and one needs to reconfigure it frequently. An example of this system is the LBNL high-speed cameras based on FPGAs used on X-rays and electron microscopy experiments. These camera systems can benefit from modern design methodologies that explore higher abstraction level modeling, which includes both functional and non-functional requirements specification and that take advantage of techniques such as object-oriented and aspect-oriented methodologies. This paper introduces HIPAO, a Hardware Image Processing system based on model driven engineering and Aspect-Oriented modeling. 
Some examples are shown for each step of the methodology that goes from requirements modeling to automatic code generation.","PeriodicalId":330873,"journal":{"name":"16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121670938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2013-06-19DOI: 10.1109/ISORC.2013.6913224
Edgars Lakis, Martin Schoeberl
For real-time systems we need to statically determine worst-case execution times (WCET) of tasks to proof the schedulability of the system. To enable static WCET analysis, the platform needs to be time-predictable. The platform includes the processor, the caches, the memory system, the operating system, and the application software itself. All those components need to be timing analyzable. Current computers use DRAM as a cost effective main memory. However, these DRAM chips have timing requirements that depend on former accesses and also need to be refreshed to retain their content. Standard memory controllers for DRAM memories are optimized to provide maximum bandwidth or throughput at the cost of variable latency for individual memory accesses. In this paper we present an SDRAM controller for realtime systems. The controller is optimized for the worst case and constant latency to provide a base of the memory hierarchy for time-predictable systems.
{"title":"An SDRAM controller for real-time systems","authors":"Edgars Lakis, Martin Schoeberl","doi":"10.1109/ISORC.2013.6913224","DOIUrl":"https://doi.org/10.1109/ISORC.2013.6913224","url":null,"abstract":"For real-time systems we need to statically determine worst-case execution times (WCET) of tasks to proof the schedulability of the system. To enable static WCET analysis, the platform needs to be time-predictable. The platform includes the processor, the caches, the memory system, the operating system, and the application software itself. All those components need to be timing analyzable. Current computers use DRAM as a cost effective main memory. However, these DRAM chips have timing requirements that depend on former accesses and also need to be refreshed to retain their content. Standard memory controllers for DRAM memories are optimized to provide maximum bandwidth or throughput at the cost of variable latency for individual memory accesses. In this paper we present an SDRAM controller for realtime systems. The controller is optimized for the worst case and constant latency to provide a base of the memory hierarchy for time-predictable systems.","PeriodicalId":330873,"journal":{"name":"16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128325674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2013-06-19DOI: 10.1109/ISORC.2013.6913214
Yassine Ouhammou, E. Grolleau, J. Hugues
To fill the gap between the modeling of real-time systems and the scheduling analysis, we propose a framework that supports seamlessly the two aspects: (1) modeling a system using a methodology, in our case study, the Architecture Analysis and Design Language (AADL), and (2) helping to easily check temporal requirements (schedulability analysis, worst-case response time, sensitivity analysis, etc.). We introduce the usefulness of an intermediate framework called MoSaRT, which supports a rich semantic concerning temporal analysis. We show with a case study how the input model is transformed into a MoSaRT model, and how our framework is able to generate the proper models as inputs to several classic temporal analysis tools.
{"title":"Mapping AADL models to a repository of multiple schedulability analysis techniques","authors":"Yassine Ouhammou, E. Grolleau, J. Hugues","doi":"10.1109/ISORC.2013.6913214","DOIUrl":"https://doi.org/10.1109/ISORC.2013.6913214","url":null,"abstract":"To fill the gap between the modeling of real-time systems and the scheduling analysis, we propose a framework that supports seamlessly the two aspects: (1) modeling a system using a methodology, in our case study, the Architecture Analysis and Design Language (AADL), and (2) helping to easily check temporal requirements (schedulability analysis, worst-case response time, sensitivity analysis, etc.). We introduce the usefulness of an intermediate framework called MoSaRT, which supports a rich semantic concerning temporal analysis. We show with a case study how the input model is transformed into a MoSaRT model, and how our framework is able to generate the proper models as inputs to several classic temporal analysis tools.","PeriodicalId":330873,"journal":{"name":"16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133918575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2013-06-19DOI: 10.1109/ISORC.2013.6913206
Anthony Sargeant, P. Townend, Jie Xu, K. Djemame
Service-Oriented Computing (SOC) provides a flexible framework in which applications may be built up from services, often distributed across a network. One of the promises of SOC is that of Dynamic Binding where abstract consumer requests are bound to concrete service instances at runtime, thereby offering a high level of flexibility and adaptability. Existing research has so far focused mostly on the design and implementation of dynamic binding operations and there is little research into a comprehensive evaluation of dynamic binding systems, especially in terms of system failure and dependability. In this paper, we present a novel, extensible evaluation framework that allows for the testing and assessment of a Dynamic Binding System (DBS). Based on a fault model specially built for DBS's, we are able to insert selectively the types of fault that would affect a DBS and observe its behavior. By treating the DBS as a black box and distributing the components of the evaluation framework we are not restricted to the implementing technologies of the DBS, nor do we need to be co-located in the same environment as the DBS under test. We present the results of a series of experiments, with a focus on the interactions between a real-life DBS and the services it employs. The results on the NECTISE Software Demonstrator (NSD) system show that our proposed method and testing framework is able to trigger abnormal behavior of the NSD due to interaction faults and generate important information for improving both dependability and performance of the system under test.
{"title":"An evaluation framework for assessing the dependability of Dynamic Binding in Service-Oriented Computing","authors":"Anthony Sargeant, P. Townend, Jie Xu, K. Djemame","doi":"10.1109/ISORC.2013.6913206","DOIUrl":"https://doi.org/10.1109/ISORC.2013.6913206","url":null,"abstract":"Service-Oriented Computing (SOC) provides a flexible framework in which applications may be built up from services, often distributed across a network. One of the promises of SOC is that of Dynamic Binding where abstract consumer requests are bound to concrete service instances at runtime, thereby offering a high level of flexibility and adaptability. Existing research has so far focused mostly on the design and implementation of dynamic binding operations and there is little research into a comprehensive evaluation of dynamic binding systems, especially in terms of system failure and dependability. In this paper, we present a novel, extensible evaluation framework that allows for the testing and assessment of a Dynamic Binding System (DBS). Based on a fault model specially built for DBS's, we are able to insert selectively the types of fault that would affect a DBS and observe its behavior. By treating the DBS as a black box and distributing the components of the evaluation framework we are not restricted to the implementing technologies of the DBS, nor do we need to be co-located in the same environment as the DBS under test. We present the results of a series of experiments, with a focus on the interactions between a real-life DBS and the services it employs. 
The results on the NECTISE Software Demonstrator (NSD) system show that our proposed method and testing framework is able to trigger abnormal behavior of the NSD due to interaction faults and generate important information for improving both dependability and performance of the system under test.","PeriodicalId":330873,"journal":{"name":"16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123866465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2013-06-19DOI: 10.1109/ISORC.2013.6913196
Tobias Stumpf, Matthias Werner
Software is getting larger and more complex. To simplify programming, languages with automatic memory management are used. In embedded systems the languages C/C++ are commonly used, which do not provide such functionalities. This paper addresses real-time garbage collection for embedded systems. We developed a conservative collector, which supports C/C++. The collector uses the mark-sweep algorithm and an installation barrier. Synchronisation points ensure termination. Memory fragmentation is avoided by memory partitioning. Compared to existing approaches, our collector provides realtime support without restrictions on the compiler, programming language or to need special hardware. We only use common hardware functionalities normally used by an operating system to avoid compiler modifications and increase performance.
{"title":"A conservative real-time garbage collector for C/C++ running on top of RTEMS","authors":"Tobias Stumpf, Matthias Werner","doi":"10.1109/ISORC.2013.6913196","DOIUrl":"https://doi.org/10.1109/ISORC.2013.6913196","url":null,"abstract":"Software is getting larger and more complex. To simplify programming, languages with automatic memory management are used. In embedded systems the languages C/C++ are commonly used, which do not provide such functionalities. This paper addresses real-time garbage collection for embedded systems. We developed a conservative collector, which supports C/C++. The collector uses the mark-sweep algorithm and an installation barrier. Synchronisation points ensure termination. Memory fragmentation is avoided by memory partitioning. Compared to existing approaches, our collector provides realtime support without restrictions on the compiler, programming language or to need special hardware. We only use common hardware functionalities normally used by an operating system to avoid compiler modifications and increase performance.","PeriodicalId":330873,"journal":{"name":"16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129180365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2013-06-19DOI: 10.1109/ISORC.2013.6913232
M. Happe, F. Heide, Peter Kling, M. Platzner, Christian Plessl
In this paper we introduce “On-The-Fly Computing”, our vision of future IT services that will be provided by assembling modular software components available on world-wide markets. After suitable components have been found, they are automatically integrated, configured and brought to execution in an On-The-Fly Compute Center. We envision that these future compute centers will continue to leverage three current trends in large scale computing which are an increasing amount of parallel processing, a trend to use heterogeneous computing resources, and - in the light of rising energy cost - energy-efficiency as a primary goal in the design and operation of computing systems. In this paper, we point out three research challenges and our current work in these areas.
{"title":"On-The-Fly Computing: A novel paradigm for individualized IT services","authors":"M. Happe, F. Heide, Peter Kling, M. Platzner, Christian Plessl","doi":"10.1109/ISORC.2013.6913232","DOIUrl":"https://doi.org/10.1109/ISORC.2013.6913232","url":null,"abstract":"In this paper we introduce “On-The-Fly Computing”, our vision of future IT services that will be provided by assembling modular software components available on world-wide markets. After suitable components have been found, they are automatically integrated, configured and brought to execution in an On-The-Fly Compute Center. We envision that these future compute centers will continue to leverage three current trends in large scale computing which are an increasing amount of parallel processing, a trend to use heterogeneous computing resources, and - in the light of rising energy cost - energy-efficiency as a primary goal in the design and operation of computing systems. In this paper, we point out three research challenges and our current work in these areas.","PeriodicalId":330873,"journal":{"name":"16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013)","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127104139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2013-06-19DOI: 10.1109/ISORC.2013.6913236
Claudia Priesterjahn, Christian Heinzemann, Wilhelm Schäfer
Embedded real-time systems are increasingly applied in safety-critical environments like cars or aircrafts. Even though the system design might be free from flaws, hazardous situations may still be caused at run-time by random faults due to the wear of physical components. Hazard analysis is based on fault trees or failure propagation models. These models are created at least partly manually. They are usually independent from the software models which are used for checking safety and liveness properties to avoid systematic faults. This is particularly bad in cases, where the software model contains manually specified operations to deal with random faults which have been identified by hazard analysis. These operations include replacing the faulty components by reconfiguration. We propose to generate a failure propagation model automatically from the software model to check whether the results of hazard analysis have been properly accounted in the specification of reconfiguration operations. In contrast to other approaches, our approach considers the real-time properties of the system and adds explicit failure propagation times based on using timed automata for model specification.
{"title":"From timed automata to timed failure propagation graphs","authors":"Claudia Priesterjahn, Christian Heinzemann, Wilhelm Schäfer","doi":"10.1109/ISORC.2013.6913236","DOIUrl":"https://doi.org/10.1109/ISORC.2013.6913236","url":null,"abstract":"Embedded real-time systems are increasingly applied in safety-critical environments like cars or aircrafts. Even though the system design might be free from flaws, hazardous situations may still be caused at run-time by random faults due to the wear of physical components. Hazard analysis is based on fault trees or failure propagation models. These models are created at least partly manually. They are usually independent from the software models which are used for checking safety and liveness properties to avoid systematic faults. This is particularly bad in cases, where the software model contains manually specified operations to deal with random faults which have been identified by hazard analysis. These operations include replacing the faulty components by reconfiguration. We propose to generate a failure propagation model automatically from the software model to check whether the results of hazard analysis have been properly accounted in the specification of reconfiguration operations. 
In contrast to other approaches, our approach considers the real-time properties of the system and adds explicit failure propagation times based on using timed automata for model specification.","PeriodicalId":330873,"journal":{"name":"16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132182205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2013-06-19DOI: 10.1109/ISORC.2013.6913237
Leonardo Montecchi, A. Ceccarelli, P. Lollini, A. Bondavalli
Highly distributed, autonomous and self-powered systems operating in harsh, outdoors environments face several threats in terms of dependability, timeliness and security, due to the challenging operating conditions determined by the environment. Despite such difficulties, there is an increasing demand to deploy these systems to support critical services, thus calling for severe timeliness, safety, and security requirements. Several challenges need to be faced and overcome. First, the designed architecture must be able to cope with the environmental challenges and satisfy dependability, timeliness and security requirements. Second, the assessment of the system must be carried on despite potentially incomplete field-data, and complex cascading effects that small modifications in system properties and operating conditions may have on the targeted metrics. In this paper we present our experience from the EU-funded project ALARP (A railway automatic track warning system based on distributed personal mobile terminals), which aims to build and validate a distributed, real-time, safety-critical system that detects trains approaching a railway worksite and notifies their arrivals to railway trackside workers. The paper describes the challenges we faced, and the solutions we adopted, when architecting and evaluating the ALARP system.
{"title":"Meeting the challenges in the design and evaluation of a trackside real-time safety-critical system","authors":"Leonardo Montecchi, A. Ceccarelli, P. Lollini, A. Bondavalli","doi":"10.1109/ISORC.2013.6913237","DOIUrl":"https://doi.org/10.1109/ISORC.2013.6913237","url":null,"abstract":"Highly distributed, autonomous and self-powered systems operating in harsh, outdoors environments face several threats in terms of dependability, timeliness and security, due to the challenging operating conditions determined by the environment. Despite such difficulties, there is an increasing demand to deploy these systems to support critical services, thus calling for severe timeliness, safety, and security requirements. Several challenges need to be faced and overcome. First, the designed architecture must be able to cope with the environmental challenges and satisfy dependability, timeliness and security requirements. Second, the assessment of the system must be carried on despite potentially incomplete field-data, and complex cascading effects that small modifications in system properties and operating conditions may have on the targeted metrics. In this paper we present our experience from the EU-funded project ALARP (A railway automatic track warning system based on distributed personal mobile terminals), which aims to build and validate a distributed, real-time, safety-critical system that detects trains approaching a railway worksite and notifies their arrivals to railway trackside workers. 
The paper describes the challenges we faced, and the solutions we adopted, when architecting and evaluating the ALARP system.","PeriodicalId":330873,"journal":{"name":"16th IEEE International Symposium on Object/component/service-oriented Real-time distributed Computing (ISORC 2013)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133454394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}