Rapid Routing with Guaranteed Delay Bounds
Sanjoy Baruah. 2018 IEEE Real-Time Systems Symposium (RTSS), December 2018. DOI: 10.1109/RTSS.2018.00012

Abstract: We consider networks in which each individual link is characterized by two delay parameters: a (usually very conservative) guaranteed upper bound on the worst-case delay, and an estimate of the delay typically encountered across the link. Given a source and a destination node in such a network and an upper bound on the end-to-end delay that can be tolerated, the objective is to determine routes that typically experience a small delay while guaranteeing to respect the specified end-to-end upper bound under all circumstances. We formalize the problem of determining such routes as a shortest-paths problem on graphs, and derive algorithms for solving this problem optimally.
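The two-parameter routing problem described above can be illustrated with a small constrained shortest-path search: minimize a path's typical delay subject to its summed worst-case delay staying within the tolerated bound. The sketch below is our own illustration under an assumed adjacency-list encoding, not the paper's optimal algorithm:

```python
import heapq

def best_typical_route(adj, src, dst, bound):
    """Constrained shortest path: minimize typical delay subject to the
    sum of worst-case delays staying within `bound`.

    adj: {node: [(neighbor, typical_delay, worst_case_delay), ...]}
    Returns (typical_delay, path) or None if no route can guarantee the bound.
    """
    # Label-setting search over (typical_so_far, worst_so_far, node, path).
    frontier = [(0, 0, src, [src])]
    best = {}  # node -> list of non-dominated (typical, worst) labels seen
    while frontier:
        typ, wc, node, path = heapq.heappop(frontier)
        if node == dst:
            return typ, path
        labels = best.setdefault(node, [])
        if any(t <= typ and w <= wc for t, w in labels):
            continue  # dominated by an existing label: prune
        labels.append((typ, wc))
        for nxt, t, w in adj.get(node, []):
            if wc + w <= bound:  # guaranteed bound still respected
                heapq.heappush(frontier, (typ + t, wc + w, nxt, path + [nxt]))
    return None
```

With a generous bound the search can take links whose worst-case delay is conservative but whose typical delay is small; a tight bound forces it onto links with firmer guarantees.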
ApNet: Approximation-Aware Real-Time Neural Network
Soroush Bateni, Cong Liu. 2018 IEEE Real-Time Systems Symposium (RTSS), December 2018. DOI: 10.1109/RTSS.2018.00017

Abstract: Modern embedded cyber-physical systems increasingly incorporate deep neural networks (DNNs) in pursuit of greater autonomy. While applying DNNs can significantly improve the accuracy of autonomous control decisions, a significant challenge is that DNNs are designed and developed on advanced hardware (e.g., GPU clusters) and will not easily meet strict timing requirements when deployed in a resource-constrained embedded computing environment. One useful characteristic of DNNs is that they admit approximation, which can be used to satisfy real-time requirements by reducing a DNN's execution cost at a reasonable sacrifice in accuracy. In this paper, we propose ApNet, a timing-predictable runtime system that guarantees deadlines of DNN workloads via efficient approximation. Rather than approximating DNNs wholesale, ApNet develops a layer-aware approximation approach that explores the trade-off between the degree of approximation and the resulting execution-time reduction on a per-layer basis. To further reduce approximation-induced accuracy loss at runtime, ApNet exploits the observation that resource sharing and approximation can mutually supplement one another, particularly in a multi-tasking environment. We have implemented and extensively evaluated ApNet on a mix of 8 different DNN configurations on an NVIDIA Jetson TX2. Experimental results show that ApNet guarantees timing predictability (i.e., meets all deadlines) while incurring a reasonable accuracy loss. Moreover, accuracy can be improved by up to 8% via a resource-sharing increase of 3.5x on average for overlapping DNN layers.
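The per-layer trade-off between approximation degree and execution-time reduction can be viewed as a small selection problem: pick one approximation level per layer so that the summed execution time fits a deadline budget while the summed accuracy loss is minimized. The dynamic program below is a generic sketch of that idea; the function name, encoding, and additive cost model are ours, not ApNet's actual mechanism:

```python
def pick_layer_approximations(layers, budget):
    """Choose one approximation level per layer so total execution time fits
    `budget` while total estimated accuracy loss is minimized.

    layers: list of per-layer option lists, each option (exec_time, acc_loss)
    with integer times (e.g., microseconds). Returns (loss, choices) or None.
    """
    # dp[t] = (min_loss, option indices) among assignments taking time t
    dp = {0: (0.0, [])}
    for options in layers:
        nxt = {}
        for t, (loss, choice) in dp.items():
            for i, (et, al) in enumerate(options):
                t2 = t + et
                if t2 > budget:
                    continue  # this assignment would miss the deadline
                cand = (loss + al, choice + [i])
                if t2 not in nxt or cand[0] < nxt[t2][0]:
                    nxt[t2] = cand
        dp = nxt
        if not dp:
            return None  # no per-layer assignment meets the budget
    return min(dp.values(), key=lambda v: v[0])
```

Loosening the budget lets the selector fall back to exact layers (zero loss), while a tight budget pushes individual layers toward cheaper approximations.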
MC-SDN: Supporting Mixed-Criticality Scheduling on Switched-Ethernet Using Software-Defined Networking
Kilho Lee, Taejune Park, Minsu Kim, H. Chwa, Jinkyu Lee, Seungwon Shin, I. Shin. 2018 IEEE Real-Time Systems Symposium (RTSS), December 2018. DOI: 10.1109/RTSS.2018.00045

Abstract: In this paper, we present the first approach to support mixed-criticality (MC) flow scheduling on switched Ethernet networks by leveraging an emerging network architecture, Software-Defined Networking (SDN). Though SDN provides flexible and programmatic ways to control packet forwarding and scheduling, enabling real-time MC flow scheduling on SDN raises several challenges, including i) how to handle (i.e., drop or reprioritize) out-of-mode packets in the middle of the network when the criticality mode changes, and ii) how the mode change affects end-to-end transmission delays. Addressing these challenges, we develop MC-SDN, which supports real-time MC flow scheduling by extending SDN-enabled switches and OpenFlow protocols. It manages and schedules MC packets in different ways depending on the system criticality mode. To this end, we carefully design a mode-change protocol that provides an analytic bound on mode-change delay, and then resolve the implementation issues in the system architecture. For evaluation, we implement a prototype of MC-SDN on top of Open vSwitch and integrate it into a real-world network testbed as well as a 1/10-scale autonomous vehicle. Our extensive evaluations with the network testbed and the vehicle deployment show that MC-SDN supports MC flow scheduling with minimal forwarding-rule update delays, and that it brings a significant improvement in safety in a real-world application scenario.
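Challenge i) above, deciding what to do with out-of-mode packets already queued inside the network, can be sketched as a per-switch queue transformation applied when the criticality mode rises. The snippet below is an illustrative model; the names and queue encoding are assumptions, not MC-SDN's OpenFlow extension:

```python
def on_mode_change(queues, new_mode, policy="drop"):
    """Handle out-of-mode packets when the criticality mode rises.

    queues: list of lists of (criticality, payload), one list per priority
    level (index 0 = highest). A packet is out-of-mode if its criticality
    is below `new_mode`. With policy "drop" such packets are discarded;
    with "demote" they move to the lowest-priority queue so in-mode
    traffic is never blocked behind them.
    """
    lowest = len(queues) - 1
    demoted = []
    for prio, q in enumerate(queues):
        kept = []
        for crit, payload in q:
            if crit >= new_mode or (policy == "demote" and prio == lowest):
                kept.append((crit, payload))  # in-mode, or already lowest
            elif policy == "demote":
                demoted.append((crit, payload))
            # else: out-of-mode packet dropped
        q[:] = kept
    queues[lowest].extend(demoted)
    return queues
```

Dropping frees bandwidth immediately at the cost of lost low-criticality traffic; demotion preserves the packets but lengthens their delay, which is the trade-off the mode-change protocol must account for in its delay bound.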
Work-in-Progress: Towards Real-Time Smart City Communications using Software Defined Wireless Mesh Networking
Akram Hakiri, A. Gokhale. 2018 IEEE Real-Time Systems Symposium (RTSS), December 2018. DOI: 10.1109/RTSS.2018.00034

Abstract: Effective management and provisioning of communication resources is as important in meeting the real-time requirements of smart city cyber-physical systems (CPS) as managing computation resources is. The communication infrastructure in smart cities often involves wireless mesh networks (WMNs). However, enforcing distributed and consistent control in WMNs is challenging, since each router of a WMN maintains only local knowledge about its neighbors, which reflects only a partial view of the overall network and hence results in suboptimal resource management decisions. These challenges are compounded when WMNs must utilize emerging technologies, such as time-sensitive networking (TSN), for the most critical communication needs, e.g., controlling traffic and pedestrian lights. An attractive solution is to adopt Software-Defined Networking (SDN), which offers a centralized, up-to-date view of the entire network by refactoring the wireless protocols into control and forwarding decisions. This paper presents ongoing work to overcome the key challenges and support the end-to-end real-time requirements of smart city CPS applications.
Work-in-Progress: From Logical Time Scheduling to Real-Time Scheduling
F. Mallet, Min Zhang. 2018 IEEE Real-Time Systems Symposium (RTSS), December 2018. DOI: 10.1109/RTSS.2018.00025

Abstract: Scheduling is a central yet challenging problem in real-time embedded systems. The Clock Constraint Specification Language (CCSL) provides a formalism to specify logical constraints on events in real-time embedded systems. A prerequisite for the events is that they must be schedulable under the constraints; that is, there must be a schedule under which all events occur infinitely often. Schedulability analysis of CCSL raises important algorithmic problems such as computational complexity and the design of efficient decision procedures. In this work, we compare the scheduling problems of CCSL specifications to the real-time scheduling problem. We show how to encode a simple task model in CCSL and discuss some benefits and differences compared to more classical scheduling strategies.
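To make the link to classical real-time scheduling concrete, a periodic task set with the alternation constraint "each job finishes before the task's next release" can be checked by stepping through discrete logical instants. The simulation below uses EDF over logical time and is our own illustration, not the paper's CCSL encoding:

```python
def feasible_schedule(tasks, horizon):
    """Logical-time schedule for implicit-deadline periodic tasks on one
    processor, enforcing the alternation release ~ finish: each job must
    finish before the next release of the same task.

    tasks: list of (period, wcet). Returns the per-instant schedule
    (task index or None per instant) or None on a deadline miss.
    """
    n = len(tasks)
    remaining = [0] * n   # remaining work of the current job of each task
    deadline = [0] * n
    schedule = []
    for t in range(horizon):
        for i, (p, c) in enumerate(tasks):
            if t % p == 0:            # release tick of task i's clock
                if remaining[i] > 0:
                    return None       # previous job missed its deadline
                remaining[i] = c
                deadline[i] = t + p
        ready = [i for i in range(n) if remaining[i] > 0]
        if ready:
            i = min(ready, key=lambda j: deadline[j])  # EDF choice
            remaining[i] -= 1
            schedule.append(i)
        else:
            schedule.append(None)     # idle instant
    return schedule
```

Here each `t % p == 0` instant plays the role of a tick of the task's release clock, and the returned schedule is one witness that all clocks can tick forever without violating the constraints.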
Distributed Real-Time Shortest-Paths Computations with the Field Calculus
Giorgio Audrito, Ferruccio Damiani, Mirko Viroli, Enrico Bini. 2018 IEEE Real-Time Systems Symposium (RTSS), December 2018. DOI: 10.1109/RTSS.2018.00013

Abstract: As the density of sensing/computation/actuation nodes increases, it becomes ever more feasible and useful to think of an entire network of physical devices as a single, continuous space-time computing machine. The emergent behaviour of the whole software system is then induced by the local computations deployed within each node and by the dynamics of information diffusion. A relevant example of this distribution model is given by aggregate computing and its companion language, the field calculus: a minimal set of purely functional constructs used to manipulate distributed data structures evolving over space and time, resulting in robustness to changes. In this paper, we study the convergence time of an archetypal and widely used component of distributed computations expressed in the field calculus, called the gradient: a fully distributed estimation of distances over a metric space by means of a spanning tree. We provide an analytic result linking the quality of a gradient's output to the amount of computing resources dedicated to it. The resulting error bounds are then exploited for network design, suggesting an optimal density value that takes broadcast interference into account. Finally, an empirical evaluation validates the theoretical results.
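The gradient studied here can be approximated by a synchronous-round simulation in which every node repeatedly takes the minimum over its neighbours' estimates plus the edge length, with sources pinned at zero. The sketch below illustrates the idea only; the field calculus itself is asynchronous and far more general:

```python
def gradient_rounds(neighbors, dist, sources, rounds):
    """Synchronous-round gradient: every node replaces its estimate with
    the min over neighbours of (neighbour estimate + edge length);
    source nodes stay at 0.

    neighbors: {node: [nbr, ...]}, dist: {(u, v): length}.
    Returns the distance-estimate field after `rounds` rounds.
    """
    INF = float('inf')
    g = {n: (0.0 if n in sources else INF) for n in neighbors}
    for _ in range(rounds):
        # All nodes update simultaneously from the previous round's field.
        g = {n: 0.0 if n in sources
             else min((g[m] + dist[(m, n)] for m in neighbors[n]),
                      default=INF)
             for n in neighbors}
    return g
```

As in the paper's convergence question, information propagates one hop per round, so a node at hop distance k needs at least k rounds before its estimate becomes exact.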
Tuned Pipes: End-to-End Throughput and Delay Guarantees for USB Devices
A. Golchin, Zhuoqun Cheng, R. West. 2018 IEEE Real-Time Systems Symposium (RTSS), December 2018. DOI: 10.1109/RTSS.2018.00037

Abstract: A fundamental problem in real-time computing is handling device input and output in a timely manner. For example, a control system might require input data from a sensor to be sampled and processed at a regular rate so that output signals to actuators occur within specific delay bounds. Input/output (I/O) devices connect to the host computer using different types of bus interfaces. One of the most popular interfaces in use today is the universal serial bus (USB). USB is now ubiquitous, in part due to its support for many classes of devices with simplified hardware needed to connect to the host. However, typical USB host controller drivers suffer from potential timing delays that affect the delivery of data between tasks and devices. Consequently, this paper introduces tuned pipes, a host controller driver and system framework that guarantees end-to-end latency and throughput requirements for I/O transfers. We expand on our earlier work involving USB 2.0 to support higher-bandwidth USB 3.x communication. As a case study, we show how a USB-Controller Area Network (CAN) guarantees temporal isolation and end-to-end guarantees on communication between a set of peripheral devices and host tasks. A comparable USB-CAN bus setup using Linux is not able to achieve the same level of temporal guarantees, even when using SCHED_DEADLINE.
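One basic ingredient of such end-to-end guarantees is an admission test on periodic bus bandwidth: a new pipe is accepted only if the per-microframe load of all reserved pipes, including the candidate, stays within the bus budget (USB services periodic transfers every 125 microseconds). The check below is a simplified sketch with an averaged cost model; a real host-controller scheduler bins transfers into specific microframes:

```python
def admit_pipe(existing, new_pipe, microframe_budget):
    """Admission test for a periodic (isochronous/interrupt) pipe.

    Each pipe is (bytes_per_transfer, microframes_per_interval): a pipe
    needing B bytes every N microframes consumes B/N bytes per microframe
    on average. The candidate is admitted only if the averaged load of all
    pipes fits within `microframe_budget` bytes per microframe.
    """
    def load(pipes):
        return sum(b / n for b, n in pipes)
    return load(existing + [new_pipe]) <= microframe_budget
```

Rejecting over-committing pipes up front is what lets every admitted pipe keep its throughput and delay guarantee regardless of later reservation requests.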
Partitioned Fixed-Priority Scheduling of Parallel Tasks Without Preemptions
Daniel Casini, Alessandro Biondi, Geoffrey Nelissen, G. Buttazzo. 2018 IEEE Real-Time Systems Symposium (RTSS), December 2018. DOI: 10.1109/RTSS.2018.00056

Abstract: The study of parallel task models executed with predictable scheduling approaches is a fundamental problem for real-time multiprocessor systems. Nevertheless, to date, limited efforts have been spent in analyzing the combination of partitioned scheduling and non-preemptive execution, which is arguably one of the most predictable schemes that can be envisaged to handle parallel tasks. This paper fills this gap by proposing an analysis for sporadic DAG tasks under partitioned fixed-priority scheduling where the computations corresponding to the nodes of the DAG are executed non-preemptively. The analysis is achieved by means of segmented self-suspending tasks with non-preemptable segments, for which a new fine-grained analysis is also proposed; the latter is shown to analytically dominate state-of-the-art approaches. A partitioning algorithm for DAG tasks is finally proposed. By means of experimental results, the proposed analysis is compared against a previously proposed analysis for DAG tasks with non-preemptable nodes managed by global fixed-priority scheduling. The comparison reveals important improvements in terms of schedulability performance.
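For intuition on analyzing non-preemptive execution under fixed priorities, the classic single-processor recurrence bounds a task's start time by lower-priority blocking plus higher-priority interference, then adds the task's own non-preemptable execution. The sketch below is this textbook analysis, much simpler than the paper's self-suspension-based DAG analysis:

```python
import math

def np_fp_response_time(tasks, i, limit=10_000):
    """Response-time bound for task i under non-preemptive fixed-priority
    scheduling on one processor (tasks sorted by decreasing priority).

    tasks: list of (C, T) pairs with implicit deadlines. Uses the classic
    blocking + level-i start-time recurrence:
        s = B_i + sum_{j in hp(i)} (floor(s / T_j) + 1) * C_j,  R_i = s + C_i
    """
    C_i, _T_i = tasks[i]
    # Blocking: one already-started lower-priority job cannot be preempted.
    block = max((c for c, t in tasks[i + 1:]), default=0)
    s = block
    while True:
        nxt = block + sum(
            (math.floor(s / t) + 1) * c for c, t in tasks[:i])
        if nxt == s:
            break  # fixed point reached: latest start time found
        s = nxt
        if s + C_i > limit:
            return None  # recurrence did not converge within the bound
    return s + C_i
```

The blocking term is exactly what partitioning must reckon with: every node mapped to a core can delay higher-priority work on that core for its whole execution.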
Dynamic Channel Selection for Real-Time Safety Message Communication in Vehicular Networks
Yunhao Bai, Kuangyu Zheng, Zejiang Wang, Xiaorui Wang, Junmin Wang. 2018 IEEE Real-Time Systems Symposium (RTSS), December 2018. DOI: 10.1109/RTSS.2018.00016

Abstract: Ensuring the real-time delivery of safety messages is an important research problem for Vehicle-to-Vehicle (V2V) communication. Unfortunately, existing work relies only on one or two pre-selected control channels for safety message communication, which can result in poor packet delivery and potential accidents when the vehicle density is high. If all the available channels can be dynamically utilized when the control channel experiences severe contention, safety messages have a much better chance of meeting their real-time deadlines. In this paper, we propose MC-Safe, a multi-channel V2V communication framework that monitors all the available channels and dynamically selects the best one for safety message transmission. MC-Safe features a novel channel negotiation scheme that allows all the vehicles involved in a potential accident to work collaboratively, in a distributed manner, to identify a communication channel that meets the delay requirement. Our evaluation results, both in simulation and on a hardware testbed with scaled cars, show that MC-Safe outperforms existing single-channel solutions and other well-designed multi-channel baselines, with a 12.31% lower deadline miss ratio and an 8.21% higher packet delivery ratio on average.
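The selection step, choosing among the monitored channels the least-loaded one whose estimated delay still meets the message deadline, can be sketched as follows. The contention model and names are our own simplifications; MC-Safe additionally negotiates the choice among the vehicles involved:

```python
def pick_channel(channels, deadline):
    """Pick a transmission channel for a safety message: among channels
    whose estimated delay meets the deadline, choose the fastest one.

    channels: {channel_id: (busy_ratio, base_delay_ms)} from monitoring.
    Returns the chosen channel id, or None if no channel can meet the
    deadline. The contention scaling below is a crude illustrative model.
    """
    best, best_delay = None, None
    for cid, (busy, base) in channels.items():
        est = base / max(1e-9, 1.0 - busy)  # delay grows as channel saturates
        if est <= deadline and (best is None or est < best_delay):
            best, best_delay = cid, est
    return best
```

When every channel is saturated the function returns None, which is exactly the situation where a single pre-selected control channel would silently miss deadlines.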
RIM: Robust Intersection Management for Connected Autonomous Vehicles
M. Khayatian, Mohammadreza Mehrabian, Aviral Shrivastava. 2018 IEEE Real-Time Systems Symposium (RTSS), December 2018. DOI: 10.1109/RTSS.2018.00014

Abstract: Utilizing intelligent transportation infrastructures can significantly improve the throughput of intersections of Connected Autonomous Vehicles (CAVs), where an Intersection Manager (IM) assigns a target velocity to incoming CAVs in order to achieve high throughput. Since the IM calculates the assigned velocity for a CAV based on a model of the CAV, it is vulnerable to model mismatches and possible external disturbances. As a result, the IM must consider a large safety buffer around all CAVs to ensure safe scheduling, which greatly degrades throughput. In addition, the IM has to assign a relatively lower speed to CAVs that intend to make a turn at the intersection to avoid rollover, which reduces the throughput of the intersection even more. In this paper, we propose a space- and time-aware technique to manage intersections of CAVs that is robust against external disturbances and model mismatches. In our method, RIM, the IM is responsible for assigning a safe Time of Arrival (TOA) and Velocity of Arrival (VOA) to an approaching CAV such that the trajectories of CAVs before and inside the intersection do not conflict. Accordingly, CAVs are responsible for determining and tracking an optimal trajectory to reach the intersection at the assigned TOA while driving at the VOA. Since CAVs track a position trajectory, the effect of bounded model mismatch and external disturbances can be compensated. In addition, CAVs that intend to make a turn at the intersection do not need to drive at a slow velocity before entering the intersection. Results from experiments on a 1/10-scale intersection of CAVs show that RIM can reduce the position error at the expected TOA by 18x on average in the presence of up to 10% model mismatch and an external disturbance with an amplitude of 5% of the maximum range. In total, our technique achieves 2.7x better throughput on average compared to velocity-assignment techniques.
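The TOA/VOA assignment can be made concrete with a two-phase velocity profile: ramp linearly from the current speed to the VOA, then cruise at the VOA so as to cover the remaining distance exactly at the assigned TOA. The planner below is an illustrative sketch; RIM's actual planner computes an optimal trajectory and tracks position online to reject disturbances:

```python
def plan_arrival(d, v0, toa, voa, a_max):
    """Two-phase profile reaching the intersection at time `toa` with
    speed `voa`: constant acceleration from v0 to voa for t1 seconds,
    then cruise at voa. Returns (ramp_duration, acceleration) or None.

    d: distance to the intersection, v0: current speed, a_max: accel limit.
    """
    if abs(v0 - voa) < 1e-9:
        # Already at VOA: feasible only if cruising covers d exactly in toa.
        return (0.0, 0.0) if abs(d - voa * toa) < 1e-6 else None
    # Ramp covers (v0+voa)/2 * t1; cruise covers voa * (toa - t1).
    # Their sum must equal d, which fixes t1:
    t1 = (d - voa * toa) / ((v0 - voa) / 2.0)
    if not (1e-9 < t1 <= toa):
        return None  # would need an instantaneous speed change or t1 > toa
    a = (voa - v0) / t1
    return (t1, a) if abs(a) <= a_max else None
```

Because the result is a full position trajectory rather than a single assigned velocity, the vehicle can measure its tracking error at every instant and compensate for bounded model mismatch before it accumulates at the stop line.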