Pub Date: 2020-08-01  DOI: 10.1109/RTCSA50079.2020.9203582  Pages: 1-10
A Coordinated Spillback-Aware Traffic Optimization and Recovery at Multiple Intersections
Pratham Oza, Thidapat Chantem, Pamela M. Murray-Tuite
Efficient traffic control remains a challenging task, especially during and after special events such as emergency vehicle traversals or links blocked by disabled vehicles. While existing approaches aim to reduce travel delays, they do not consider recovery from the spillbacks caused by such interruptions in the traffic network. This paper (1) presents an optimal algorithm that maximizes traffic flow through the road network while ensuring that spillbacks do not occur during normal operations, (2) proposes an effective, predictable mitigation strategy to recover from spillbacks that are caused by special events and may have propagated through multiple links and/or intersections in the network, and (3) provides worst-case wait time bounds as well as recovery time bounds for the proposed techniques. Compared to existing approaches, our optimal strategy improves worst-case travel times by 53.2%. Additionally, our mitigation strategy recovers from spillbacks that have propagated through multiple links in the network up to 50.9% faster than existing approaches.
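The optimization itself is not reproduced in the abstract; as a loose, hypothetical illustration of the kind of formulation involved, the sketch below maximizes served flow across two conflicting signal phases subject to a usable-cycle limit and a no-queue-growth (and hence no-spillback) condition. The saturation flows, arrival rates, and the 0.9 usable-cycle fraction are all invented for the example, and the model is far simpler than the paper's multi-intersection algorithm.

```python
# Hypothetical illustration only: a toy green-time allocation that maximizes
# served flow while keeping every queue from growing (so spillbacks cannot
# form), loosely in the spirit of spillback-aware optimization. All numbers
# are made up.
from scipy.optimize import linprog

# Decision variables: effective green ratios g1, g2 for two conflicting phases.
# Served flow on link i is saturation_flow_i * g_i (veh/h).
sat_flow = [1800.0, 1600.0]   # saturation flows (veh/h), assumed
arrivals = [800.0, 600.0]     # arrival rates (veh/h), assumed

# Maximize sat1*g1 + sat2*g2  ->  minimize the negative.
c = [-sat_flow[0], -sat_flow[1]]

# Constraints:
#   g1 + g2 <= 0.9                 (lost time leaves 90% of the cycle usable)
#   arrivals_i - sat_i*g_i <= 0    (serve at least the demand so queues, and
#                                   hence spillbacks, cannot grow)
A_ub = [[1.0, 1.0],
        [-sat_flow[0], 0.0],
        [0.0, -sat_flow[1]]]
b_ub = [0.9, -arrivals[0], -arrivals[1]]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0), (0.0, 1.0)])
if res.success:
    print("green ratios:", res.x, "served flow (veh/h):", -res.fun)
```

Under these numbers the surplus green time goes to the phase with the higher saturation flow, which is the basic intuition behind throughput-maximizing splits.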
{"title":"A Coordinated Spillback-Aware Traffic Optimization and Recovery at Multiple Intersections","authors":"Pratham Oza, Thidapat Chantem, Pamela M. Murray-Tuite","doi":"10.1109/RTCSA50079.2020.9203582","DOIUrl":"https://doi.org/10.1109/RTCSA50079.2020.9203582","url":null,"abstract":"Efficient traffic control remains a challenging task, especially during and after special events such as emergency vehicle traversals or blocked links due to disabled vehicles. While existing approaches aim to reduce travel delays, they do not consider recovery from spillbacks caused by such interruptions in the traffic network. This paper (1) presents an optimal algorithm that maximizes the traffic flow through the road network while ensuring that spillbacks do not occur during normal operations, (2) proposes an effective, predictable mitigation strategy to recover from spillbacks caused by special events and which may have propagated through multiple links and/or intersections in the network, and (3) provides worst-case wait time bounds as well as recovery time bounds associated with the proposed techniques. Compared to existing approaches, our optimal strategy shows a 53.2% improvement in worst-case travel times. Additionally, our mitigation strategy can recover from spillbacks that have propagated through multiple links in the network by up to 50.9% quicker than the existing approaches.","PeriodicalId":38446,"journal":{"name":"International Journal of Embedded and Real-Time Communication Systems (IJERTCS)","volume":"64 1","pages":"1-10"},"PeriodicalIF":0.7,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82234146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-08-01  DOI: 10.1109/rtcsa50079.2020.9203705  Pages: 1-1
N. Mitton, Dario Bruneo
Message from the General and Program co-Chairs
{"title":"Message from the General and Program co-Chairs","authors":"N. Mitton, Dario Bruneo","doi":"10.1109/rtcsa50079.2020.9203705","DOIUrl":"https://doi.org/10.1109/rtcsa50079.2020.9203705","url":null,"abstract":"Message from the General and Program co-Chairs","PeriodicalId":38446,"journal":{"name":"International Journal of Embedded and Real-Time Communication Systems (IJERTCS)","volume":"119 1","pages":"1-1"},"PeriodicalIF":0.7,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77953875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-08-01  DOI: 10.1109/rtcsa50079.2020.9203736
[Copyright notice]
{"title":"[Copyright notice]","authors":"","doi":"10.1109/rtcsa50079.2020.9203736","DOIUrl":"https://doi.org/10.1109/rtcsa50079.2020.9203736","url":null,"abstract":"","PeriodicalId":38446,"journal":{"name":"International Journal of Embedded and Real-Time Communication Systems (IJERTCS)","volume":"2 1","pages":""},"PeriodicalIF":0.7,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88850881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-08-01  DOI: 10.1109/RTCSA50079.2020.9203676  Pages: 1-10
Scheduling Real-time Deep Learning Services as Imprecise Computations
Shuochao Yao, Yifan Hao, Yiran Zhao, Huajie Shao, Dongxin Liu, Shengzhong Liu, Tianshi Wang, Jinyang Li, T. Abdelzaher
The paper presents a real-time computing framework for intelligent real-time edge services that act on behalf of local embedded devices that are themselves unable to support extensive computations. The work contributes to a new direction in real-time computing that develops scheduling algorithms for machine intelligence tasks to enable anytime prediction. We show that deep neural network workflows can be cast as imprecise computations, each with a mandatory part and (several) optional parts whose execution utility depends on the input data. With our design, deep neural networks can be preempted before their completion and support anytime inference. The goal of the real-time scheduler is to maximize the average accuracy of deep neural network outputs while meeting task deadlines, thanks to opportunistic shedding of the least necessary optional parts. The work is motivated by the proliferation of increasingly ubiquitous but resource-constrained embedded devices (for applications ranging from autonomous cars to the Internet of Things) and the desire to develop services that endow them with intelligence. Experiments on recent GPU hardware and a state-of-the-art deep neural network for machine vision show that our scheme can increase the overall accuracy by 10% ∼ 20% while incurring (nearly) no deadline misses.
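The abstract does not spell out the scheduling policy; the sketch below only illustrates the imprecise-computation structure it builds on: each inference job has a mandatory prefix that must complete by its deadline and optional stages that are shed when the remaining budget is too small. The job fields, the numbers, and the greedy utility-per-millisecond shedding rule are assumptions for illustration, not the paper's algorithm.

```python
# Hypothetical sketch of imprecise-computation admission for anytime DNN
# inference: run the mandatory prefix of every job, then keep only those
# optional stages that still fit in the remaining time budget, dropping the
# lowest-utility ones first. All values are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class InferenceJob:
    name: str
    mandatory_ms: float             # time for the mandatory prefix
    optional_ms: List[float]        # per optional stage
    optional_utility: List[float]   # expected accuracy gain per stage
    deadline_ms: float

def plan_job(job: InferenceJob) -> List[int]:
    """Return indices of optional stages to execute so the job meets its deadline."""
    budget = job.deadline_ms - job.mandatory_ms
    if budget < 0:
        raise ValueError(f"{job.name}: mandatory part alone misses the deadline")
    # Greedy: highest utility per millisecond first (a heuristic, not the paper's policy).
    order = sorted(range(len(job.optional_ms)),
                   key=lambda i: job.optional_utility[i] / job.optional_ms[i],
                   reverse=True)
    chosen = []
    for i in order:
        if job.optional_ms[i] <= budget:
            chosen.append(i)
            budget -= job.optional_ms[i]
    return sorted(chosen)

job = InferenceJob("camera_frame", mandatory_ms=8.0,
                   optional_ms=[4.0, 3.0, 5.0],
                   optional_utility=[0.06, 0.05, 0.04],
                   deadline_ms=18.0)
print(plan_job(job))   # [0, 1]: two optional stages fit, the third is shed
```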
{"title":"Scheduling Real-time Deep Learning Services as Imprecise Computations","authors":"Shuochao Yao, Yifan Hao, Yiran Zhao, Huajie Shao, Dongxin Liu, Shengzhong Liu, Tianshi Wang, Jinyang Li, T. Abdelzaher","doi":"10.1109/RTCSA50079.2020.9203676","DOIUrl":"https://doi.org/10.1109/RTCSA50079.2020.9203676","url":null,"abstract":"The paper presents a real-time computing framework for intelligent real-time edge services, on behalf of local embedded devices that are themselves unable to support extensive computations. The work contributes to a new direction in realtime computing that develops scheduling algorithms for machine intelligence tasks that enable anytime prediction. We show that deep neural network workflows can be cast as imprecise computations, each with a mandatory part and (several) optional parts whose execution utility depends on input data. With our design, deep neural networks can be preempted before their completion and support anytime inference. The goal of the realtime scheduler is to maximize the average accuracy of deep neural network outputs while meeting task deadlines, thanks to opportunistic shedding of the least necessary optional parts. The work is motivated by the proliferation of increasingly ubiquitous but resource-constrained embedded devices (for applications ranging from autonomous cars to the Internet of Things) and the desire to develop services that endow them with intelligence. Experiments on recent GPU hardware and a state of the art deep neural network for machine vision illustrate that our scheme can increase the overall accuracy by 10% ∼ 20% while incurring (nearly) no deadline misses.","PeriodicalId":38446,"journal":{"name":"International Journal of Embedded and Real-Time Communication Systems (IJERTCS)","volume":"83 1","pages":"1-10"},"PeriodicalIF":0.7,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74958810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-08-01  DOI: 10.1109/RTCSA50079.2020.9203584  Pages: 1-6
Local Spatio-Temporal Propagation Based Adaptive Model Generation and Update for High Frame Rate and Ultra-Low Delay Foreground Detection
P. Cai, Songlin Du, T. Ikenaga
High-frame-rate and ultra-low-delay matching systems play an increasingly important role in human-machine interactive applications, which demand a better experience and higher accuracy. Foreground detection is an indispensable preprocessing step for making such systems suitable for complex scenes. Although many foreground detection algorithms have been proposed, few achieve high speed in hardware because of their high complexity or high resource consumption. Based on the foreground detection algorithm ViBe, this paper proposes a local spatio-temporal propagation based adaptive model generation and update strategy for high frame rate and ultra-low delay foreground detection. First, our algorithm predicts whether a region is foreground by setting up detection points, thereby adaptively adjusting the number of pixels that need to be modeled. Second, local linear illumination correlation is used to update the models, which makes the algorithm more robust to illumination changes. The evaluation results show that the proposed algorithm achieves real-time processing on a field-programmable gate array (FPGA) at a resolution of 640×480 pixels, with a delay of 0.908 ms/frame.
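The FPGA pipeline cannot be reconstructed from the abstract; the snippet below sketches only the per-pixel classification rule of ViBe, the baseline that the proposed adaptive model generation extends: a pixel is background if enough of its stored model samples lie within a colour radius. The sample count, radius, and match threshold are typical ViBe-style values, assumed here rather than taken from the paper.

```python
# Hypothetical NumPy sketch of the basic ViBe classification step the paper
# builds on. The adaptive, detection-point-driven modelling and the FPGA
# pipeline from the paper are not shown. Parameters are assumed.
import numpy as np

N_SAMPLES = 20      # samples kept per pixel
RADIUS = 20         # intensity distance that counts as a "match"
MIN_MATCHES = 2     # matches needed to call a pixel background

def classify(frame: np.ndarray, model: np.ndarray) -> np.ndarray:
    """frame: (H, W) grayscale; model: (N_SAMPLES, H, W). Returns a boolean foreground mask."""
    dist = np.abs(model.astype(np.int16) - frame.astype(np.int16))   # (N, H, W)
    matches = np.count_nonzero(dist < RADIUS, axis=0)                # per-pixel match count
    return matches < MIN_MATCHES                                     # True where foreground

# Toy usage with random data at 640x480, the resolution used in the paper's evaluation.
rng = np.random.default_rng(0)
model = rng.integers(0, 256, size=(N_SAMPLES, 480, 640), dtype=np.uint8)
frame = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
print(classify(frame, model).mean())   # fraction of pixels flagged as foreground
```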
{"title":"Local Spatio-Temporal Propagation Based Adaptive Model Generation and Update for High Frame Rate and Ultra-Low Delay Foreground Detection","authors":"P. Cai, Songlin Du, T. Ikenaga","doi":"10.1109/RTCSA50079.2020.9203584","DOIUrl":"https://doi.org/10.1109/RTCSA50079.2020.9203584","url":null,"abstract":"High frame rate and ultra-low delay matching system plays an increasingly important role in human-machine interactive applications, which demands better experience and higher accuracy. Foreground detection is an indispensable preprocessing step to make the system suitable for complex scenes. Although many foreground detection algorithms have been proposed, few can achieve high speed in hardware due to their high complexity or high consumption. Based on the foreground detection algorithm ViBe, this paper proposes a local spatio-temporal propagation based adaptive model generation and update strategy for high frame rate and ultra-low delay foreground detection. Our algorithm predicts whether a region is a foreground by setting up detecting points, thereby adaptively adjusting the number of pixels that needs to be modeled. Secondly, the local linear illumination correlation is used to update models, which makes the algorithm more robust to illumination changes. The evaluation results show that the proposed algorithm successfully achieves real-time processing on the field-programmable gate array (FPGA) at a resolution of $mathbf{640}timesmathbf{480}$ pixels, with a delay of 0.908ms/frame.","PeriodicalId":38446,"journal":{"name":"International Journal of Embedded and Real-Time Communication Systems (IJERTCS)","volume":"5 1","pages":"1-6"},"PeriodicalIF":0.7,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72919475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-08-01  DOI: 10.1109/RTCSA50079.2020.9203738  Pages: 1-10
Error Vulnerabilities and Fault Recovery in Deep-Learning Frameworks for Hardware Accelerators
Iljoo Baek, Zhihao Zhu, Sourav Panda, N. K. Srinivasan, Soheil Samii, R. Rajkumar
Hardware accelerators such as GP-GPUs, Tensor Cores, and Deep-Learning Accelerators (DLAs) are increasingly being used in real-time settings such as autonomous vehicles (AVs). In such deployments, software errors and process failures in these systems can lead to critical faults in AVs. Therefore, assessing and mitigating hardware accelerator faults are critical requirements for safety-critical systems. Past work on this subject focused on simulated and injected software and hardware faults to understand and analyze the behavior of the software stack and the entire system. However, programming errors and process failures that arise when using software frameworks must also be considered. In this paper, we present experiments showing that widely used deep-learning frameworks are vulnerable to programming mistakes and errors. We first focus on memory-related programming errors in applications that use deep-learning frameworks for high-performance inferencing. We then find that a reset to recover from any fault imposes significant time penalties for reloading a pre-trained deep neural network model. To reduce these fault recovery times, we propose fault recovery mechanisms that checkpoint and resume the network, based on the inference stage, when an error is detected. Finally, we substantiate the practical feasibility of our approach and evaluate the improvement in recovery times. (A demo video clip demonstrating our recovery algorithm is available on YouTube: https://www.youtube.com/watch?v=xwUYdJdA5oM.) We use a case study with real-world applications on an Nvidia GeForce GTX 1070 GPU and an Nvidia Xavier embedded platform, which is commonly used by multiple automotive OEMs.
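The paper's stage-based checkpointing is specific to its deep-learning frameworks and hardware; the sketch below only illustrates the general idea of resuming from the last completed pipeline stage after a fault instead of reloading and rerunning everything. The stage functions, the RuntimeError fault model, and the retry limit are all hypothetical.

```python
# Hypothetical sketch of stage-wise checkpoint/resume for an inference
# pipeline: keep the output of the last completed stage so that, after a
# fault, execution resumes there rather than restarting the whole network.
from typing import Any, Callable, List

def run_with_checkpoints(stages: List[Callable[[Any], Any]], x: Any,
                         max_retries: int = 3) -> Any:
    checkpoint = x          # last known-good intermediate result
    stage_idx = 0           # index of the next stage to run
    retries = 0
    while stage_idx < len(stages):
        try:
            checkpoint = stages[stage_idx](checkpoint)
            stage_idx += 1                      # commit: advance past this stage
            retries = 0
        except RuntimeError as err:             # stand-in for an accelerator fault
            retries += 1
            if retries > max_retries:
                raise
            print(f"stage {stage_idx} failed ({err}); resuming from checkpoint")
    return checkpoint

# Toy pipeline: three "stages" that just transform a number; the second one
# fails once to show that only that stage is re-executed.
state = {"failed_once": False}
def flaky(v):
    if not state["failed_once"]:
        state["failed_once"] = True
        raise RuntimeError("injected fault")
    return v * 2

print(run_with_checkpoints([lambda v: v + 1, flaky, lambda v: v - 3], 10))  # (10+1)*2-3 = 19
```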
{"title":"Error Vulnerabilities and Fault Recovery in Deep-Learning Frameworks for Hardware Accelerators","authors":"Iljoo Baek, Zhihao Zhu, Sourav Panda, N. K. Srinivasan, Soheil Samii, R. Rajkumar","doi":"10.1109/RTCSA50079.2020.9203738","DOIUrl":"https://doi.org/10.1109/RTCSA50079.2020.9203738","url":null,"abstract":"Hardware accelerators such as GP-GPUs, Tensor Cores, and Deep-Learning Accelerators (DLA) are increasingly being used in real-time settings such as autonomous vehicles (AVs). In such deployments, any software errors and process failures in hardware systems can lead to critical faults in AVs. Therefore, assessing and mitigating hardware accelerator faults are critical requirements for safety-critical systems. Past work on this subject focused on simulated and injected software and hardware faults to understand and analyze the behavior of the software stack and the entire system. However, programming errors and process failures caused when using software frameworks must also be considered. In this paper, we present experiments which show that widely used deep-learning frameworks are vulnerable to programming mistakes and errors. We first focus on memory-related programming errors caused by applications using deep-learning frameworks that facilitate high-performance inferencing. We next find that a reset to recover from any fault imposes significant time penalties in reloading a pre-trained deep neural network model. To reduce these fault recovery times, we propose fault recovery mechanisms that checkpoint and resume the network based on the inference stage when an error is detected. Finally, we substantiate the practical feasibility of our approach and evaluate the improvement in recovery times11A demo video clip demonstrating our recovery algorithm has been uploaded to Youtube: https://www.youtube.com/watch?v=xwUYdJdA5oM.. We use a case-study with real-world applications on an Nvidia GeForce GTX 1070 GPU and an Nvidia Xavier embedded platform, which is commonly used by multiple automotive OEMs.","PeriodicalId":38446,"journal":{"name":"International Journal of Embedded and Real-Time Communication Systems (IJERTCS)","volume":"1 1","pages":"1-10"},"PeriodicalIF":0.7,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72841361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-08-01  DOI: 10.1109/RTCSA50079.2020.9203575  Pages: 1-10
Exploiting Simultaneous Multithreading in Priority-Driven Hard Real-Time Systems
S. Osborne, Shareef Ahmed, Saujas Nandi, James H. Anderson
Simultaneous multithreading (SMT) can dramatically improve real-time scheduling, but existing methods are cumbersome, frequently require specialized hardware, or are limited to producing table-based schedules. Here, an easily portable method for quickly applying SMT to priority-driven hard real-time systems is given. Using a combination of integer linear programming and heuristic bin-packing, a partitioned earliest-deadline-first (EDF) scheduler that takes advantage of SMT is produced. The integer linear programming and partitioning are done offline, but generally require only a few seconds, even for task sets with over a hundred tasks. A large-scale schedulability study is conducted, showing that, compared to partitioned scheduling without SMT, the schedulable utilization for the considered hardware platform is nearly doubled in the best cases.
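The ILP formulation is not given in the abstract; as a rough sketch of the heuristic half, the code below partitions implicit-deadline tasks onto cores first-fit decreasing, accepting an assignment while each core's total utilization stays at or below the EDF bound of 1. How SMT-paired threads are modelled and how the ILP chooses pairings is not represented, and the task utilizations are invented.

```python
# Hypothetical sketch of first-fit-decreasing partitioning under the
# per-core EDF utilization bound for implicit-deadline tasks. The paper's
# ILP-based SMT pairing step is not modelled here.
from typing import Dict, List

def partition_ffd(utilizations: List[float], num_cores: int) -> Dict[int, List[int]]:
    """Return a mapping core -> task indices, or raise if the set does not fit."""
    load = {core: 0.0 for core in range(num_cores)}
    assignment: Dict[int, List[int]] = {core: [] for core in range(num_cores)}
    # Sort task indices by decreasing utilization (the "decreasing" in FFD).
    for task in sorted(range(len(utilizations)),
                       key=lambda i: utilizations[i], reverse=True):
        for core in range(num_cores):
            if load[core] + utilizations[task] <= 1.0:   # EDF schedulability per core
                load[core] += utilizations[task]
                assignment[core].append(task)
                break
        else:
            raise ValueError(f"task {task} (u={utilizations[task]}) does not fit")
    return assignment

print(partition_ffd([0.6, 0.5, 0.4, 0.3, 0.2], num_cores=2))
```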
{"title":"Exploiting Simultaneous Multithreading in Priority-Driven Hard Real-Time Systems","authors":"S. Osborne, Shareef Ahmed, Saujas Nandi, James H. Anderson","doi":"10.1109/RTCSA50079.2020.9203575","DOIUrl":"https://doi.org/10.1109/RTCSA50079.2020.9203575","url":null,"abstract":"Simultaneous multithreading (SMT) has the ability to dramatically improve real-time scheduling, but existing methods are cumbersome, frequently need specialized hardware, or are limited to producing table-based schedules. Here, an easily portable method for quickly applying SMT to priority-driven hard real-time systems is given. Using a combination of integer linear programming and heuristic bin-packing, a partitioned earliest-deadline-first (EDF) scheduler that takes advantage of SMT is produced. The integer linear programming and partitioning are done offline, but generally require only a few seconds, even given over a hundred tasks. A large-scale schedulability study is conducted, showing that compared to partitioned scheduling without SMT, the schedulable utilization for the considered hardware platform is nearly doubled in the best cases.","PeriodicalId":38446,"journal":{"name":"International Journal of Embedded and Real-Time Communication Systems (IJERTCS)","volume":"113 1","pages":"1-10"},"PeriodicalIF":0.7,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77705701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-08-01  DOI: 10.1109/RTCSA50079.2020.9203594  Pages: 1-10
Identification and Validation of Markov Models with Continuous Emission Distributions for Execution Times
A. Friebe, A. Papadopoulos, T. Nolte
It has been shown that in some robotic applications, where execution times cannot be assumed to be independent and identically distributed, a Markov chain with discrete emission distributions can be an appropriate model. In this paper we investigate whether execution times can be modeled as a Markov chain with continuous Gaussian emission distributions. The main advantage of this approach is that the concept of distance is naturally incorporated. We propose a framework based on Hidden Markov Model (HMM) methods that 1) identifies the number of states in the Markov model from observations and fits the model to those observations, and 2) validates the proposed model with respect to the observations. Specifically, we apply a tree-based cross-validation approach to automatically find a suitable number of states in the Markov model. The estimated models are validated against observations using a data-consistency approach based on log-likelihood distributions under the proposed model. The framework is evaluated using two test cases executed on a Raspberry Pi Model 3B+ single-board computer running Arch Linux ARM patched with PREEMPT_RT. The first is a simple test program in which execution times intentionally vary according to a Markov model, and the second is video decompression using the ffmpeg program. The results show that in these cases the framework identifies Markov chains with Gaussian emission distributions that are valid models with respect to the observations.
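As a rough, hypothetical companion to the described framework, the sketch below fits Gaussian-emission HMMs with different numbers of hidden states to synthetic execution times and compares held-out log-likelihoods. hmmlearn is an assumed implementation choice (the paper does not name its tooling), and plain held-out scoring only loosely mirrors the tree-based cross-validation and data-consistency checks described above.

```python
# Hypothetical sketch: fit Gaussian-emission HMMs of varying size to observed
# execution times and compare held-out log-likelihoods. Data are synthetic.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)
# Synthetic execution times (ms) drawn from two "modes".
times = np.concatenate([rng.normal(10.0, 0.5, 300),
                        rng.normal(14.0, 0.8, 300)]).reshape(-1, 1)

train, test = times[:400], times[400:]
for n_states in (1, 2, 3):
    model = GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=100, random_state=0)
    model.fit(train)
    print(n_states, "states, held-out log-likelihood:", model.score(test))
```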
{"title":"Identification and Validation of Markov Models with Continuous Emission Distributions for Execution Times","authors":"A. Friebe, A. Papadopoulos, T. Nolte","doi":"10.1109/RTCSA50079.2020.9203594","DOIUrl":"https://doi.org/10.1109/RTCSA50079.2020.9203594","url":null,"abstract":"It has been shown that in some robotic applications, where the execution times cannot be assumed to be independent and identically distributed, a Markov Chain with discrete emission distributions can be an appropriate model. In this paper we investigate whether execution times can be modeled as a Markov Chain with continuous Gaussian emission distributions. The main advantage of this approach is that the concept of distance is naturally incorporated. We propose a framework based on Hidden Markov Model (HMM) methods that 1) identifies the number of states in the Markov Model from observations and fits the Markov Model to observations, and 2) validates the proposed model with respect to observations. Specifically, we apply a tree-based cross-validation approach to automatically find a suitable number of states in the Markov model. The estimated models are validated against observations, using a data consistency approach based on log likelihood distributions under the proposed model. The framework is evaluated using two test cases executed on a Raspberry Pi Model 3B+ single-board computer running Arch Linux ARM patched with PREEMPT_RT. The first is a simple test program where execution times intentionally vary according to a Markov model, and the second is a video decompression using the ffmpeg program. The results show that in these cases the framework identifies Markov Chains with Gaussian emission distributions that are valid models with respect to the observations.","PeriodicalId":38446,"journal":{"name":"International Journal of Embedded and Real-Time Communication Systems (IJERTCS)","volume":"36 1","pages":"1-10"},"PeriodicalIF":0.7,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83715037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-08-01  DOI: 10.1109/RTCSA50079.2020.9203660  Pages: 1-10
Efficient Deterministic Federated Scheduling for Parallel Real-Time Tasks
Son Dinh, C. Gill, Kunal Agrawal
Federated scheduling is a generalization of partitioned scheduling for parallel tasks on multiprocessors, and has been shown to be a competitive scheduling approach. However, federated scheduling may waste resources due to its dedicated allocation of processors to parallel tasks. In this work we introduce a novel algorithm for scheduling parallel tasks that require more than one processor to meet their deadlines (i.e., heavy tasks). The proposed algorithm computes a deterministic schedule for each heavy task based on its internal graph structure. It efficiently exploits the processors allocated to each task and thus reduces the number of processors required by the task. Experimental evaluation shows that our new federated scheduling algorithm significantly outperforms other state-of-the-art federated-based scheduling approaches, including semi-federated scheduling and reservation-based federated scheduling, that were developed to tackle resource waste in federated scheduling, and a stretching algorithm that also uses the tasks' graph structures.
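For context, classical federated scheduling allocates a heavy DAG task with total work C_i, critical-path length L_i, and deadline D_i exactly n_i = ceil((C_i - L_i) / (D_i - L_i)) dedicated processors; the paper's deterministic, structure-aware per-task schedule aims to get by with fewer. The small helper below computes this baseline allocation (the example numbers are illustrative).

```python
# The classical federated-scheduling allocation that the paper improves upon:
# a heavy task receives ceil((C - L) / (D - L)) dedicated processors, where
# C is total work, L the critical-path length, and D the deadline.
from math import ceil

def federated_cores(work: float, span: float, deadline: float) -> int:
    """Dedicated processors for one task under classical federated scheduling."""
    if span > deadline:
        raise ValueError("infeasible: critical path exceeds the deadline")
    if work <= deadline:
        return 1                      # not a heavy task; it can share a processor
    return ceil((work - span) / (deadline - span))

# Example: C=100, L=20, D=40 -> ceil(80/20) = 4 processors.
print(federated_cores(work=100.0, span=20.0, deadline=40.0))
```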
{"title":"Efficient Deterministic Federated Scheduling for Parallel Real-Time Tasks","authors":"Son Dinh, C. Gill, Kunal Agrawal","doi":"10.1109/RTCSA50079.2020.9203660","DOIUrl":"https://doi.org/10.1109/RTCSA50079.2020.9203660","url":null,"abstract":"Federated scheduling is a generalization of partitioned scheduling for parallel tasks on multiprocessors, and has been shown to be a competitive scheduling approach. However, federated scheduling may waste resources due to its dedicated allocation of processors to parallel tasks. In this work we introduce a novel algorithm for scheduling parallel tasks that require more than one processor to meet their deadlines (i.e., heavy tasks). The proposed algorithm computes a deterministic schedule for each heavy task based on its internal graph structure. It efficiently exploits the processors allocated to each task and thus reduces the number of processors required by the task. Experimental evaluation shows that our new federated scheduling algorithm significantly outperforms other state-of-the-art federated-based scheduling approaches, including semi-federated scheduling and reservation-based federated scheduling, that were developed to tackle resource waste in federated scheduling, and a stretching algorithm that also uses the tasks' graph structures.","PeriodicalId":38446,"journal":{"name":"International Journal of Embedded and Real-Time Communication Systems (IJERTCS)","volume":"74 1","pages":"1-10"},"PeriodicalIF":0.7,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86323975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-08-01  DOI: 10.1109/rtcsa50079.2020.9203588  Pages: 1-3
S. Son, Hakan Aydin, Takuya Azumi, R. J. Bril, L. Carnevali, H. Chwa, Youcheng Sun, Hyoseung Kim, Chang-Gun Lee, Hiroyuki Tomiyama
RTCSA 2020 Committees
{"title":"RTCSA 2020 Committees","authors":"S. Son, Hakan Aydin, Takuya Azumi, R. J. Bril, L. Carnevali, H. Chwa, Youcheng Sun, Hyoseung Kim, Chang-Gun Lee, Hiroyuki Tomiyama","doi":"10.1109/rtcsa50079.2020.9203588","DOIUrl":"https://doi.org/10.1109/rtcsa50079.2020.9203588","url":null,"abstract":"RTCSA 2020 Committees","PeriodicalId":38446,"journal":{"name":"International Journal of Embedded and Real-Time Communication Systems (IJERTCS)","volume":"240 1","pages":"1-3"},"PeriodicalIF":0.7,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77137790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}