In this paper, voltage scaling strategies for scheduling aperiodic tasks under average delay constraints are studied. Dynamic voltage scaling in single-processor systems is formulated as a constrained stochastic optimization problem whose optimal solution can be obtained by combining Lagrange relaxation with the value iteration method. For multiprocessor systems, we present a two-phase approach. In the first phase, the speed settings and static workload distribution of the processors are optimized to minimize total power dissipation. Dynamic voltage scaling techniques are then applied to each individual processor in the second phase. Both homogeneous and heterogeneous systems are investigated. Based on queueing theory, the proposed algorithms guarantee conformity to the average delay constraint, and our simulation experiments show that they are effective in minimizing power consumption.
Fan Zhang and S. Chanson, "Power-aware processor scheduling under average delay constraints," in Proc. 11th IEEE Real Time and Embedded Technology and Applications Symposium (RTAS 2005), 2005, doi:10.1109/RTAS.2005.39.
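To make the single-processor formulation concrete, the sketch below (our illustration, not the paper's model) casts speed selection as a constrained Markov decision process over the queue length and solves it with Lagrange relaxation plus value iteration; the arrival probability, speed levels, cubic power model, and backlog target are all assumptions.

```python
# Toy sketch: single-processor DVS as a constrained MDP. State = queue length,
# action = speed setting, power ~ speed^3. The average-delay constraint is
# handled by Lagrange relaxation: value iteration runs on the Lagrangian cost
# "power + lambda * backlog", and lambda is found by bisection so that the
# resulting average backlog (hence, by Little's law, the average delay) meets
# the target. All constants below are assumptions, not values from the paper.

import random

QMAX, SPEEDS = 20, [0.25, 0.5, 0.75, 1.0]   # queue cap, normalized speeds
P_ARRIVE, GAMMA = 0.4, 0.999                 # arrival prob per slot, discount ~ 1

def transitions(q, s):
    """Next-state distribution: one arrival w.p. P_ARRIVE, one completion w.p. s."""
    out = []
    for arr, pa in ((1, P_ARRIVE), (0, 1 - P_ARRIVE)):
        for done, pd in ((1, s), (0, 1 - s)):
            nq = q + arr
            if nq > 0 and done:
                nq -= 1
            out.append((pa * pd, min(QMAX, nq)))
    return out

def value_iteration(lam, iters=1000):
    """Discounted approximation of the average-cost Lagrangian problem."""
    V = [0.0] * (QMAX + 1)
    policy = [SPEEDS[-1]] * (QMAX + 1)
    for _ in range(iters):
        newV = list(V)
        for q in range(QMAX + 1):
            costs = []
            for s in SPEEDS:
                c = s ** 3 + lam * q            # power + Lagrangian delay term
                c += GAMMA * sum(p * V[nq] for p, nq in transitions(q, s))
                costs.append((c, s))
            newV[q], policy[q] = min(costs)
        V = newV
    return policy

def avg_backlog(policy, slots=50_000, seed=1):
    rng, q, total = random.Random(seed), 0, 0
    for _ in range(slots):
        if rng.random() < P_ARRIVE:
            q = min(QMAX, q + 1)
        if q > 0 and rng.random() < policy[q]:
            q -= 1
        total += q
    return total / slots

TARGET_BACKLOG = 2.0           # average-delay target expressed as a backlog
lo, hi = 0.0, 10.0
for _ in range(15):            # bisection on the Lagrange multiplier
    lam = (lo + hi) / 2
    if avg_backlog(value_iteration(lam)) > TARGET_BACKLOG:
        lo = lam               # constraint violated: weight delay more heavily
    else:
        hi = lam
print("speed per queue length:", value_iteration(hi))
```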
The memory model used in the Real-Time Specification for Java (RTSJ) imposes strict assignment rules on references to or from memory areas, preventing the creation of dangling pointers and thus maintaining the pointer safety of Java. One implementation strategy for checking these rules before each assignment statement is to perform the check dynamically using write barriers. This solution adversely affects both the performance and the predictability of RTSJ applications. In this paper we present an efficient algorithm for managing scoped regions, which requires some modifications to the current RTSJ specification.
M. T. Higuera-Toledano, "Towards an understanding of the behavior of the single parent rule in the RTSJ scoped memory model," in Proc. 11th IEEE Real Time and Embedded Technology and Applications Symposium (RTAS 2005), 2005, doi:10.1109/RTAS.2005.56.
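The following sketch illustrates, in Python rather than the RTSJ's Java API, the kind of check a write barrier performs before each reference assignment under the scoped-memory rules; the class and function names are ours, and heap/immortal memory is modelled simply as a root of the scope tree.

```python
# Illustrative sketch (not the RTSJ API): the check a write barrier might
# perform before every reference assignment "x.f = y" under the scoped-memory
# rules. Each scoped area records the single parent it was entered from (the
# single parent rule), so "outlives" reduces to an ancestor test in the scope
# tree; heap and immortal memory act as roots that outlive every scope.

class IllegalAssignmentError(Exception):
    pass

class MemoryArea:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent    # parent = None for heap/immortal

    def outlives(self, other):
        """True if self is `other` itself or an ancestor of `other`."""
        area = other
        while area is not None:
            if area is self:
                return True
            area = area.parent
        return False

def write_barrier(referrer_area, referee_area):
    """Check the store x.f = y, with x in referrer_area and y in referee_area.

    The store is legal only if the referenced object's area outlives the
    referencing object's area; otherwise x.f would dangle once the inner
    scope is reclaimed.
    """
    if not referee_area.outlives(referrer_area):
        raise IllegalAssignmentError(
            f"cannot store reference to '{referee_area.name}' object "
            f"inside '{referrer_area.name}' object")

HEAP  = MemoryArea("heap")
outer = MemoryArea("outer scope", parent=HEAP)
inner = MemoryArea("inner scope", parent=outer)

write_barrier(inner, outer)        # ok: inner-scope object -> outer-scope object
write_barrier(inner, HEAP)         # ok: any object may reference the heap
try:
    write_barrier(outer, inner)    # illegal: outer-scope object -> inner-scope object
except IllegalAssignmentError as e:
    print("rejected:", e)
```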
Dynamic voltage and frequency scaling is increasingly being used to reduce the energy requirements of embedded and real-time applications by exploiting idle CPU resources while still maintaining all applications' real-time characteristics. Accurate predictions of task run-times are key to computing the frequencies and voltages that ensure all tasks' real-time constraints are met. Past work has used feedback-based approaches, where applications' past CPU utilizations are used to predict future CPU requirements. Mispredictions in these approaches can lead to missed deadlines, suboptimal energy savings, or large overheads due to frequent changes of the chosen frequency or voltage. One shortcoming of previous approaches is that they ignore other indicators of future CPU requirements, such as the frequency of I/O operations, memory accesses, or interrupts. This paper addresses the energy consumption of memory-bound real-time applications via a feedback-loop approach based on measured task run-times and cache miss rates. Using cache miss rates as an indicator of memory access rates yields a more reliable predictor of future task run-times: even in modern processor architectures, memory latencies can be hidden only partially, so cache misses can be used to improve run-time predictions by accounting for potential memory latencies. The results shown in this paper indicate improvements in both the number of deadlines met and the amount of energy saved.
C. Poellabauer, Leo Singleton, and K. Schwan, "Feedback-based dynamic voltage and frequency scaling for memory-bound real-time applications," in Proc. 11th IEEE Real Time and Embedded Technology and Applications Symposium (RTAS 2005), 2005, doi:10.1109/RTAS.2005.23.
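The sketch below illustrates one plausible reading of such a feedback policy: the measured run-time is split into a frequency-scalable CPU part and a memory-stall part estimated from the cache miss count, and the lowest frequency whose predicted run-time still meets the deadline is chosen. The frequency table, miss penalty, and smoothing factor are assumptions, not values from the paper.

```python
# Hedged sketch of a feedback DVFS policy: decompose each measured run-time
# into CPU-bound cycles (scale with 1/frequency) and memory-stall time
# (estimated from cache misses, frequency-independent), smooth both, and pick
# the lowest frequency whose prediction fits within the deadline.

FREQS = [600e6, 800e6, 1000e6, 1200e6]   # available frequencies in Hz (assumed)
MISS_PENALTY = 120e-9                     # seconds per cache miss (assumed)
ALPHA = 0.5                               # exponential smoothing factor (assumed)

class RuntimePredictor:
    def __init__(self):
        self.cpu_cycles = 0.0    # smoothed estimate of CPU-bound cycles
        self.misses = 0.0        # smoothed estimate of cache misses

    def update(self, runtime, freq, miss_count):
        """Feedback step: decompose the measured run-time of the last instance."""
        mem_time = miss_count * MISS_PENALTY          # does not scale with freq
        cpu_time = max(runtime - mem_time, 0.0)       # scales with 1/freq
        self.cpu_cycles = ALPHA * (cpu_time * freq) + (1 - ALPHA) * self.cpu_cycles
        self.misses = ALPHA * miss_count + (1 - ALPHA) * self.misses

    def predict(self, freq):
        return self.cpu_cycles / freq + self.misses * MISS_PENALTY

def choose_frequency(pred, deadline, margin=0.9):
    """Lowest frequency whose predicted run-time fits within the deadline."""
    for f in FREQS:                                   # ascending order
        if pred.predict(f) <= margin * deadline:
            return f
    return FREQS[-1]                                  # fall back to max speed

# Example feedback loop over hypothetical measurements (runtime s, freq, misses)
pred = RuntimePredictor()
for runtime, freq, misses in [(4.0e-3, 1200e6, 12_000), (4.2e-3, 1200e6, 15_000)]:
    pred.update(runtime, freq, misses)
print("next frequency:", choose_frequency(pred, deadline=6.0e-3))
```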
No single middleware communication model completely solves the problem of ensuring schedulability in every DRE system. Furthermore, there have been few studies to date of the trade-offs between alternative middleware communication models under different application scenarios. This paper makes three contributions to the state of the art in middleware for distributed real-time and embedded systems. First, it describes what we believe is the first example of integrating release guards directly with CORBA distributable threads to ensure appropriate release times for sub-tasks along an end-to-end computation. Second, it presents empirical results in which release guards improve the schedulability of distributable threads compared to a greedy protocol in which arriving tasks simply begin to run as soon as they can. Third, it offers the first empirical comparisons of the distributable thread and event channel models under three different communication scenarios and under a randomized workload.
Yuanfang Zhang, B. Thrall, Stephen Torri, C. Gill, and Chenyang Lu, "A real-time performance comparison of distributable threads and event channels," in Proc. 11th IEEE Real Time and Embedded Technology and Applications Symposium (RTAS 2005), 2005, doi:10.1109/RTAS.2005.5.
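As an illustration of the release-guard idea (not the CORBA middleware integration itself), the sketch below spaces consecutive releases of a sub-task at least one period apart and contrasts this with the greedy policy of releasing on arrival; all numbers are invented.

```python
# Minimal sketch of a release-guard policy for sub-tasks of an end-to-end
# task, contrasted with the greedy policy ("run as soon as the predecessor's
# message arrives"). Under release guards, consecutive releases of the same
# sub-task are at least one period apart, which bounds the downstream jitter
# that a greedy release would otherwise propagate. Names are illustrative.

class ReleaseGuard:
    def __init__(self, period):
        self.period = period
        self.last_release = None

    def release_time(self, arrival):
        """Earliest legal release for a sub-task instance arriving at `arrival`."""
        if self.last_release is None:
            release = arrival                                  # first instance
        else:
            release = max(arrival, self.last_release + self.period)
        self.last_release = release
        return release

def greedy_release(arrival):
    return arrival            # baseline: release immediately on arrival

# Example: upstream jitter makes instances arrive at 0, 3, 21, 24 although the
# nominal period is 10; the guard spaces releases at least 10 time units apart.
guard = ReleaseGuard(period=10)
arrivals = [0, 3, 21, 24]
print([guard.release_time(a) for a in arrivals])   # [0, 10, 21, 31]
print([greedy_release(a) for a in arrivals])       # [0, 3, 21, 24]
```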
In this paper, we study the problem of allocating end-to-end bandwidth to each of multiple traffic flows in a large-scale network. We adopt the QoS-based resource allocation model (Q-RAM) (K.-S. Lui et al., 2000), whereby each flow derives a utility based on the amount of bandwidth allocated to it. Our goal, therefore, is to maximize the total utility derived across all network flows. The NP-hard nature of the resource allocation problem is compounded by the need to select an appropriate path between each source-destination pair. We propose a hierarchical decomposition scheme that allows the resource allocation problem to be solved in a decentralized and scalable fashion. The hierarchy we use is based on a natural partitioning of the network into subnets, with resource allocation decisions made on a subnet-by-subnet basis. A novel distributed transaction scheme ensures that resource allocations are consistent across all the subnets traversed by each flow. We provide both analytical and experimental evidence showing that our scheme is highly scalable without sacrificing the quality of the allocations.
Sourav Ghosh, R. Rajkumar, Jeffery P. Hansen, and J. Lehoczky, "Scalable QoS-based resource allocation in hierarchical networked environment," in Proc. 11th IEEE Real Time and Embedded Technology and Applications Symposium (RTAS 2005), 2005, doi:10.1109/RTAS.2005.47.
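The sketch below shows the flat, single-level core of the Q-RAM-style allocation the paper builds on: bandwidth is handed out one quantum at a time to the flow with the highest marginal utility, subject to the capacity of every link on its path. The hierarchical, subnet-by-subnet decomposition and the distributed transactions are not shown, and the utilities, paths, and capacities are invented.

```python
# Greedy marginal-utility bandwidth allocation (flat Q-RAM-style sketch).
import math

QUANTUM = 1.0   # bandwidth granularity in Mbps (assumed)

flows = {                         # flow -> (concave utility of bandwidth, path links)
    "f1": (lambda b: 10 * math.log1p(b), ["A-B", "B-C"]),
    "f2": (lambda b: 6  * math.log1p(b), ["A-B"]),
    "f3": (lambda b: 8  * math.log1p(b), ["B-C", "C-D"]),
}
capacity = {"A-B": 20.0, "B-C": 15.0, "C-D": 10.0}   # link capacities (assumed)

alloc = {f: 0.0 for f in flows}
used  = {l: 0.0 for l in capacity}

while True:
    best, best_gain = None, 0.0
    for f, (util, path) in flows.items():
        # a quantum is feasible only if every link on the flow's path has room
        if any(used[l] + QUANTUM > capacity[l] for l in path):
            continue
        gain = util(alloc[f] + QUANTUM) - util(alloc[f])   # marginal utility
        if gain > best_gain:
            best, best_gain = f, gain
    if best is None:
        break                           # no flow can accept another quantum
    alloc[best] += QUANTUM
    for l in flows[best][1]:
        used[l] += QUANTUM

print("allocation (Mbps):", alloc)
print("total utility:", sum(u(alloc[f]) for f, (u, _) in flows.items()))
```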
In the control of continuous physical systems, the controlled system is sampled sufficiently fast to capture the system dynamics. In general, this property does not hold for the control of computer systems, as the measured variables are often computed over a data set, e.g., the deadline miss ratio. In this paper we quantify the disturbance present in the measured variable as a function of the sampling period, and we propose a control structure that suppresses this measurement disturbance. The experiments we have carried out show that a controller using the proposed control structure outperforms a traditional control structure with regard to performance reliability and adaptation.
M. Amirijoo, J. Hansson, S. Gunnarsson, and S. Son, "Enhancing feedback control scheduling performance by on-line quantification and suppression of measurement disturbance," in Proc. 11th IEEE Real Time and Embedded Technology and Applications Symposium (RTAS 2005), 2005, doi:10.1109/RTAS.2005.21.
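A minimal sketch of the idea, under our own assumptions rather than the authors' controller design: the miss ratio measured over one sampling period is weighted by the number of completed tasks before it enters a standard PI update, so that sparsely populated periods cannot jerk the control signal around.

```python
# Hedged sketch of a feedback scheduling loop: the deadline-miss ratio measured
# over one sampling period is a noisy estimate whose reliability grows with the
# number of completed tasks, so each measurement is filtered with a gain
# proportional to the sample count before a PI update of the admitted
# utilization. The weighting rule and gains are illustrative assumptions.

class MissRatioController:
    def __init__(self, setpoint=0.02, kp=0.5, ki=0.1, n_ref=100):
        self.setpoint = setpoint     # target miss ratio
        self.kp, self.ki = kp, ki    # PI gains (assumed)
        self.n_ref = n_ref           # sample count at which the data is fully trusted
        self.integral = 0.0
        self.filtered = setpoint     # filtered miss-ratio estimate

    def update(self, missed, completed):
        """One sampling period: returns the change to apply to admitted utilization."""
        raw = missed / completed if completed else self.filtered
        # Disturbance suppression: small windows get a small filter gain,
        # so a few unlucky tasks cannot move the control signal much.
        gain = min(1.0, completed / self.n_ref)
        self.filtered += gain * (raw - self.filtered)
        error = self.setpoint - self.filtered     # negative when missing too much
        self.integral += error
        return self.kp * error + self.ki * self.integral

# Example: sparse periods (few completions) barely move the control signal.
ctrl = MissRatioController()
for missed, completed in [(1, 5), (0, 200), (10, 220), (2, 210)]:
    print(round(ctrl.update(missed, completed), 4))
```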
Component-based development (CBD) techniques have been widely used to enhance productivity and reduce cost in software systems development. However, applying CBD techniques to embedded software development faces additional challenges. For embedded systems, it is crucial to consider quality of service (QoS) attributes such as timeliness, memory limitations, output precision, and battery constraints. Frequently, multiple components implementing the same functionality with different QoS properties can be used to compose a system. Software components may also have parameters that can be configured to satisfy different QoS requirements. Composition analysis, which determines the most suitable component selections and parameter settings to best satisfy the system's QoS requirements, is therefore very important in the embedded software development process. In this paper, we present a model and methodologies to facilitate composition analysis. We define QoS requirements as constraints and objectives. Composition analysis is performed based on the QoS properties and requirements to find solutions (component selections and parameter settings) that optimize the QoS objectives while satisfying the QoS constraints. We model the composition analysis problem using a multiobjective formulation and use an evolutionary algorithm to determine the Pareto-optimal solutions efficiently.
Hui Ma, Dongfeng Wang, F. Bastani, I. Yen, and K. Cooper, "A model and methodology for composition QoS analysis of embedded systems," in Proc. 11th IEEE Real Time and Embedded Technology and Applications Symposium (RTAS 2005), 2005, doi:10.1109/RTAS.2005.2.
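The sketch below illustrates composition analysis as constrained multiobjective search with a small archive-based evolutionary loop; the component variants, QoS budgets, and objectives are invented, and the loop merely stands in for the paper's algorithm.

```python
# Composition analysis sketch: pick one implementation per component so that
# QoS constraints hold and the remaining objectives are mutually non-dominated.
# A tiny archive-based evolutionary loop (mutate a member of the archive, keep
# feasible non-dominated children) stands in for the paper's algorithm.
import random

# component -> implementation variants, each as (latency_ms, memory_kb, energy_mJ)
VARIANTS = {
    "decoder": [(5, 120, 9), (8, 60, 6), (12, 40, 4)],
    "filter":  [(3, 80, 5), (6, 30, 3)],
    "tracker": [(10, 200, 12), (15, 90, 7), (20, 60, 5)],
}
LATENCY_BUDGET, MEMORY_BUDGET = 30, 350       # QoS constraints (assumed)

def evaluate(choice):
    picked = [VARIANTS[c][i] for c, i in choice.items()]
    latency = sum(v[0] for v in picked)
    memory  = sum(v[1] for v in picked)
    energy  = sum(v[2] for v in picked)
    feasible = latency <= LATENCY_BUDGET and memory <= MEMORY_BUDGET
    return feasible, (latency, energy)        # objectives: minimize both

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def mutate(choice, rng):
    c = rng.choice(list(VARIANTS))
    return {**choice, c: rng.randrange(len(VARIANTS[c]))}

rng = random.Random(0)
archive = {}                                  # frozen choice -> objective vector
seed_choice = {c: 0 for c in VARIANTS}
for _ in range(500):
    parent = dict(rng.choice(list(archive))) if archive else seed_choice
    child = mutate(parent, rng)
    feasible, objs = evaluate(child)
    if not feasible or any(dominates(o, objs) for o in archive.values()):
        continue                              # infeasible or dominated: discard
    archive = {k: o for k, o in archive.items() if not dominates(objs, o)}
    archive[frozenset(child.items())] = objs

for sol, objs in archive.items():
    print(dict(sol), "-> (latency, energy):", objs)
```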
Modern embedded systems are typically integrated as multiprocessor systems on chip and are often characterized by the complex behaviors and dependencies that system components exhibit. Different events that trigger such systems normally cause different execution demands, depending on their event type as well as on the task by which they are processed, leading to complex workload correlations. For example, in data processing systems, the size of an event's payload data will typically determine its execution demand on most or all system components, leading to highly correlated workloads. Performance analysis of such complex systems is often very difficult, and conventional analysis methods have no means to capture the possible existence of workload correlations. This leads to overly pessimistic analysis results, and thus to overly expensive system designs with considerable performance reserves. We propose an abstract model to characterize and capture the workload correlations present in a system architecture, and we show how this additional system information can be incorporated into an existing framework for modular performance analysis of embedded systems. We also present a method to analytically derive the proposed abstract workload correlation model from a typical system specification. The applicability of our approach and its advantages over conventional performance analysis methods are shown in a detailed case study of a multiprocessor system on chip, where the analysis results obtained with our approach are considerably better than those obtained with conventional analysis methods.
E. Wandeler and L. Thiele, "Characterizing workload correlations in multi processor hard real-time systems," in Proc. 11th IEEE Real Time and Embedded Technology and Applications Symposium (RTAS 2005), 2005, doi:10.1109/RTAS.2005.13.
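The toy example below shows why capturing correlations matters: when one event stream drives two processors and an event's type fixes its demand on both, the type sequence that is worst for one processor cannot simultaneously be worst for the other, so decoupled per-processor bounds are pessimistic. The demand numbers are invented and the example is not the paper's model.

```python
# Why workload correlations matter: the same event stream is processed by two
# tasks on different processors, and an event's type fixes its demand on both.
from itertools import product

# event type -> (cycles on CPU1, cycles on CPU2): types that are expensive on
# CPU1 are cheap on CPU2 and vice versa (e.g. header-heavy vs. payload-heavy)
DEMAND = {"small": (2, 9), "medium": (5, 5), "big": (9, 2)}
K = 4                     # any K consecutive events of the shared stream

# Conventional (decoupled) analysis: each CPU is charged its own worst type.
decoupled = (K * max(d1 for d1, _ in DEMAND.values()),
             K * max(d2 for _, d2 in DEMAND.values()))

# Correlation-aware view: one and the same type sequence drives both CPUs,
# so the sequence that is worst for CPU1 fixes what CPU2 can see.
worst_for_cpu1 = max(product(DEMAND, repeat=K),
                     key=lambda seq: sum(DEMAND[t][0] for t in seq))
coupled = (sum(DEMAND[t][0] for t in worst_for_cpu1),
           sum(DEMAND[t][1] for t in worst_for_cpu1))

print("decoupled per-CPU bound :", decoupled)   # (36, 36)
print("same-sequence demands   :", coupled)     # (36, 8)
```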
As computer systems become increasingly internetworked, there is a growing class of distributed real-time embedded (DRE) applications that have characteristics and present challenges beyond those of traditional embedded systems. They involve many heterogeneous nodes and links and shared, constrained resources, and they are deployed in dynamic environments with changing participants. In this paper, we present a representative DRE application of medium scale that we are developing for the DARPA PCES program. This application uses several unmanned aerial vehicles, command and control centers, and ground-based combat vehicles to perform surveillance, detection, and tracking of time-critical targets, an ever-increasing threat in today's world. We describe the application, the scenario in which it is being demonstrated, and the issues and challenges associated with developing a DRE application of this complexity.
J. Loyall, R. Schantz, D. Corman, J. Paunicka, and Sylvester Fernandez, "A distributed real-time embedded application for surveillance, detection, and tracking of time critical targets," in Proc. 11th IEEE Real Time and Embedded Technology and Applications Symposium (RTAS 2005), 2005, doi:10.1109/RTAS.2005.1.
This paper describes a solution for on-line timestamping in a distributed architecture embedded in an experimental vehicle. Interval timestamping is used, taking into consideration sensor latency, transmission delay, and clock granularity. The solution does not change local system clocks, so the network configuration can change without affecting timestamping precision. All nodes of the network are connected via a synchronous bus network (here, FireWire, IEEE 1394). The bus clock is used to estimate the drift of all computer clocks and to exchange data timestamps with high precision. Experimental simulations show the advantages of this solution. The method is well adapted to dynamic applications, where data timestamping is important for real-time considerations. An application in the field of intelligent vehicles is then described.
O. Bezet and V. Berge-Cherfaoui, "On-line timestamping synchronization in distributed sensor architectures," in Proc. 11th IEEE Real Time and Embedded Technology and Applications Symposium (RTAS 2005), 2005, doi:10.1109/RTAS.2005.36.
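The sketch below gives a simplified picture of interval timestamping against a shared bus clock: each node maintains a linear local-to-bus time mapping estimated from reference observations and widens every timestamp by clock granularity, residual drift, and the sensor's acquisition latency bounds. All constants are assumptions, not values from the paper.

```python
# Simplified sketch of interval timestamping over a shared bus clock (here
# standing in for the IEEE 1394 cycle time). Local clocks are never adjusted;
# instead each node maps local time to bus time and reports every measurement
# as an interval covering granularity, drift uncertainty and sensor latency.

GRANULARITY    = 1e-6               # local clock tick in seconds (assumed)
DRIFT_BOUND    = 50e-6              # residual drift uncertainty, s per s (assumed)
SENSOR_LATENCY = (0.5e-3, 2.0e-3)   # min/max acquisition latency in s (assumed)

class BusClockMapping:
    """Maps local time to bus time using two (local, bus) reference pairs."""
    def __init__(self, ref1, ref2):
        (l1, b1), (l2, b2) = ref1, ref2
        self.rate = (b2 - b1) / (l2 - l1)     # estimated relative clock rate
        self.l0, self.b0 = l2, b2             # most recent reference point

    def to_bus_interval(self, local_t):
        elapsed = local_t - self.l0
        center = self.b0 + self.rate * elapsed
        slack = GRANULARITY + abs(elapsed) * DRIFT_BOUND
        return (center - slack, center + slack)

def timestamp_sample(mapping, local_receive_t):
    """Bus-time interval during which the sensed value was actually acquired."""
    lo, hi = mapping.to_bus_interval(local_receive_t)
    return (lo - SENSOR_LATENCY[1], hi - SENSOR_LATENCY[0])

# Example: two (local, bus) reference observations, then timestamp a sample
# received at local time 12.503 s.
m = BusClockMapping((10.000000, 100.000020), (12.000000, 102.000080))
print(timestamp_sample(m, 12.503000))
```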