
ACM Transactions on Embedded Computing Systems: Latest Publications

Multi-Traffic Resource Optimization for Real-Time Applications with 5G Configured Grant Scheduling
IF 2 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-05-28 | DOI: 10.1145/3664621
Yungang Pan, Rouhollah Mahfouzi, Soheil Samii, Petru Eles, Zebo Peng

The fifth-generation (5G) technology standard in telecommunications is expected to support ultra-reliable low-latency communication to enable real-time applications such as industrial automation and control. 5G configured grant (CG) scheduling features a pre-allocated, periodicity-based scheduling approach, which reduces control signaling time and guarantees service quality. Although this enables 5G to support hard real-time periodic traffic, synthesizing the schedule efficiently and achieving high resource efficiency while serving multiple communications remains an open problem. In this work, we study the trade-off between scheduling flexibility and control overhead when performing CG scheduling. To address the CG scheduling problem, we first formulate it using satisfiability modulo theories (SMT) so that an SMT solver can be used to generate optimal solutions. To enhance scalability, we propose two heuristic approaches. The first, Co1, serves as the baseline and follows the basic idea of the 5G CG scheduling scheme, minimizing the control overhead. The second, CoU, enables increased scheduling flexibility while accounting for the control overhead involved. The effectiveness and scalability of the proposed techniques, and the superiority of CoU over Co1, are evaluated using a large number of generated benchmarks as well as a realistic case study for industrial automation.
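The periodic, pre-allocated character of CG scheduling can be illustrated with a toy slot-assignment sketch. This is our simplification, not the paper's SMT formulation: each traffic flow repeats with a fixed period, and a configuration reserves one slot offset that recurs every period.

```python
# Toy sketch of periodic configured-grant (CG) slot assignment.
# Two periodic flows collide if their recurring slots ever coincide.
from math import gcd

def collides(offset_a, period_a, offset_b, period_b):
    # Slots offset_a + i*period_a and offset_b + j*period_b coincide
    # iff (offset_a - offset_b) is divisible by gcd of the periods.
    return (offset_a - offset_b) % gcd(period_a, period_b) == 0

def assign_offsets(flows):
    """Greedily pick a collision-free slot offset per flow (periods in slots)."""
    assigned = []  # list of (offset, period) already fixed
    offsets = []
    for period in flows:
        for offset in range(period):
            if all(not collides(offset, period, o, p) for o, p in assigned):
                assigned.append((offset, period))
                offsets.append(offset)
                break
        else:
            offsets.append(None)  # no feasible offset for this flow
    return offsets

print(assign_offsets([4, 4, 8]))  # → [0, 1, 2]
```

An SMT-based formulation, as in the paper, would instead encode the non-collision condition as constraints and let the solver optimize the layout globally.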

Citations: 0
Dynamic Cluster Head Selection in WSN
IF 2 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-05-25 | DOI: 10.1145/3665867
Rupendra Pratap Singh Hada, Abhishek Srivastava

A Wireless Sensor Network (WSN) comprises an ad-hoc network of sensor-laden nodes used to monitor a region, typically outdoors and often not easily accessible. Despite exceptions, many WSN deployments continue to grapple with the limitation of finite battery energy. It is therefore imperative that the energy of a WSN be conserved and its lifetime prolonged. An important direction of work to this end is transmitting data between nodes in a manner that expends minimum energy. One approach is cluster-based routing, wherein the nodes of a WSN are organised into clusters and data from each node is transmitted through a representative node called a cluster-head. Forming optimal clusters and choosing an optimal cluster-head is an NP-hard problem. Significant work has been done on mechanisms for forming clusters and choosing cluster-heads that reduce the transmission overhead to a minimum. In this paper, an approach is proposed to create clusters and identify cluster-heads that are near optimal. The approach involves two-stage clustering, with the clustering algorithm for each stage chosen through an exhaustive search. Furthermore, unlike existing approaches that choose a cluster-head on the basis of the residual energy of nodes alone, the proposed approach utilises three factors in addition to residual energy: the distance of a node from the cluster centroid, the distance of a node from the final destination (base station), and the connectivity of the node. The approach is shown to be effective and economical through extensive validation via simulations and a real-world prototypical implementation.
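The four-factor cluster-head choice could be sketched as a weighted score; the weights, field names, and linear combination below are our illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical scoring sketch of multi-factor cluster-head selection:
# residual energy and connectivity raise the score; distance to the
# cluster centroid and to the base station lower it.
from math import dist

def head_score(node, centroid, base_station, w=(0.4, 0.2, 0.2, 0.2)):
    we, wc, wb, wn = w  # illustrative weights
    return (we * node["energy"]
            - wc * dist(node["pos"], centroid)
            - wb * dist(node["pos"], base_station)
            + wn * node["neighbors"])

def pick_head(cluster, centroid, base_station):
    return max(cluster, key=lambda n: head_score(n, centroid, base_station))

nodes = [
    {"id": 1, "energy": 0.9, "pos": (0.0, 0.0), "neighbors": 3},
    {"id": 2, "energy": 0.5, "pos": (1.0, 1.0), "neighbors": 5},
]
# Node 2 wins: its lower energy is offset by better connectivity and
# a shorter path to the base station.
print(pick_head(nodes, centroid=(0.5, 0.5), base_station=(0.0, 5.0))["id"])  # → 2
```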

Citations: 0
Lightweight Hardware-Based Cache Side-Channel Attack Detection for Edge Devices (Edge-CaSCADe)
IF 2 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-05-11 | DOI: 10.1145/3663673
Pavitra Bhade, Joseph Paturel, Olivier Sentieys, Sharad Sinha

Cache Side-Channel Attacks (CSCA) have been haunting most processor architectures for decades. Existing mitigation approaches have drawbacks, namely software mishandling, performance overhead, and low throughput due to false alarms. Hence, “mitigation only when detected” should be the approach to minimize the effects of such drawbacks. We propose a novel methodology for fine-grained detection of timing-based CSCA using a hardware-based detection module.

We discuss the design, implementation, and use of our proposed detection module in processor architectures. Our approach successfully detects attacks that flush secret victim information from cache memory, such as Flush+Reload, Flush+Flush, Prime+Probe, Evict+Probe, and Prime+Abort, commonly known as cache timing attacks. Detection is timely, with minimal performance overhead. The parameterizable number of counters used in our module allows detection of multiple attacks on multiple sensitive locations simultaneously. The fine-grained nature ensures negligible false alarms, greatly reducing the need for unnecessary mitigation. The proposed work is evaluated by synthesizing the entire detection algorithm as an attack detection block, Edge-CaSCADe, in a RISC-V processor as a target example. The detection results are checked under different workload conditions with respect to the number of attackers, the number of victims using RSA-, AES-, and ECC-based encryption schemes such as ECIES, and on benchmark applications such as MiBench and Embench. More than 98% detection accuracy within 2% of the beginning of an attack can be achieved with negligible false alarms. The detection module has an area overhead of 0.9% to 2% and a power overhead of 1% to 2.1% for the targeted RISC-V processor core without cache, for 1 to 5 counters, respectively. The detection module does not affect the processor's critical path and hence has no impact on its maximum operating frequency.
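The per-location counter idea can be sketched in software. This is our own simplified model, not the Edge-CaSCADe hardware design: a counter per monitored address tracks suspicious cache events (e.g. flushes of a sensitive line) in a sliding window, and crossing a threshold raises a flag.

```python
# Illustrative sliding-window counter detector (software model of the
# hardware idea; window/threshold values are arbitrary examples).
from collections import deque

class CacheEventDetector:
    def __init__(self, window=100, threshold=20):
        self.window, self.threshold = window, threshold
        self.events = {}  # monitored address -> deque of event times

    def monitor(self, addr):
        self.events[addr] = deque()

    def record(self, addr, time):
        """Record one suspicious event; return True when an attack is suspected."""
        q = self.events.get(addr)
        if q is None:
            return False  # address is not monitored
        q.append(time)
        while q and time - q[0] > self.window:  # drop stale events
            q.popleft()
        return len(q) >= self.threshold

det = CacheEventDetector(window=50, threshold=3)
det.monitor(0xBEEF)
print([det.record(0xBEEF, t) for t in (0, 10, 20)])  # → [False, False, True]
```

Monitoring several addresses at once mirrors the module's parameterizable number of counters; the fine granularity (one counter per sensitive location) is what keeps false alarms low.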

Citations: 0
Reordering Functions in Mobile Apps for Reduced Size and Faster Start-Up
IF 2 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-04-20 | DOI: 10.1145/3660635
Ellis Hoag, Kyungwoo Lee, Julián Mestre, Sergey Pupyrev, YongKang Zhu

Function layout, also known as function reordering or function placement, is one of the most effective profile-guided compiler optimizations. By reordering functions in a binary, compilers can improve the performance of large-scale applications or reduce the compressed size of mobile applications. Although the technique has been extensively studied in the context of large-scale binaries, no study has thoroughly investigated function layout algorithms on mobile applications.

In this paper we develop the first principled solution for optimizing function layouts in the mobile space. To this end, we identify two key optimization goals: reducing the compressed code size and improving the cold start-up time of a mobile application. Then we propose a formal model for the layout problem, whose objective closely matches our goals, and a novel algorithm for optimizing the layout. The method is inspired by the classic balanced graph partitioning problem. We have carefully engineered and implemented the algorithm in an open-source compiler, LLVM. An extensive evaluation of the new method on large commercial mobile applications demonstrates improvements in start-up time and compressed size compared to the state-of-the-art approach.
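The balanced-graph-partitioning inspiration can be sketched with a toy recursive bisection over a function-affinity graph. The refinement heuristic, weights, and function names below are our illustrative assumptions, not the LLVM implementation.

```python
# Minimal recursive-bisection sketch of graph-partition-based function
# layout: functions that execute together (high affinity) are pulled
# into the same half at every split, so they end up adjacent in the
# final order, helping both compression and start-up locality.
def layout(funcs, weight):
    """funcs: list of names; weight(a, b): affinity between two functions."""
    if len(funcs) <= 2:
        return list(funcs)
    half = len(funcs) // 2
    left, right = funcs[:half], funcs[half:]
    # Greedy refinement: move each function to the side it is more
    # attached to, keeping the split roughly balanced.
    for f in list(funcs):
        side, other = (left, right) if f in left else (right, left)
        gain = sum(weight(f, g) for g in other) - sum(
            weight(f, g) for g in side if g != f)
        if gain > 0 and len(side) > len(funcs) // 3:
            side.remove(f)
            other.append(f)
    return layout(left, weight) + layout(right, weight)

# Toy affinity profile: main/init run together, helper/log run together.
w = {("main", "init"): 10, ("helper", "log"): 8}
aff = lambda a, b: w.get((a, b), 0) + w.get((b, a), 0)
print(layout(["main", "helper", "init", "log"], aff))
```

In the result, each high-affinity pair ends up in adjacent positions, which is the property the compressed-size and cold-start objectives reward.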

Citations: 0
NAVIDRO, a CARES architectural style for configuring drone co-simulation
IF 2 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-03-17 | DOI: 10.1145/3651889
Loic Salmon, Pierre-Yves Pillain, Goulven Guillou, Jean-Philippe Babau

One primary objective of drone simulation is to evaluate diverse drone configurations and contexts aligned with specific user objectives. The initial challenge for simulator designers involves managing the heterogeneity of drone components, encompassing both software and hardware systems, as well as the drone’s behavior. To facilitate the integration of these diverse models, the Functional Mock-Up Interface (FMI) for Co-Simulation proposes a generic data-oriented interface. However, an additional challenge lies in simplifying the configuration of co-simulation, necessitating an approach to guide the modeling of parametric features and operational conditions such as failures or environment changes.

The paper addresses this challenge by introducing CARES, a Model-Driven Engineering (MDE) and component-based approach for designing drone simulators, integrating the Functional Mock-Up Interface (FMI) for Co-Simulation. The proposed models incorporate concepts from Component-Based Software Engineering (CBSE) and FMI. The NAVIDRO architectural style is presented for designing and configuring drone co-simulation. CARES utilizes a code generator to produce structural glue code (Java or C++), facilitating the integration of FMI-based domain-specific code. The approach is evaluated through the development of a simulator for navigation functions in an Autonomous Underwater Vehicle (AUV), demonstrating its effectiveness in assessing various AUV configurations and contexts.
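An FMI-style co-simulation master loop can be sketched as follows. This is a hand-rolled toy, not the actual FMI API or NAVIDRO: two components expose set/get/do_step, and a master exchanges their data at fixed communication points, as FMI for Co-Simulation prescribes.

```python
# Toy co-simulation: a drone-like plant and a controller coupled by a
# master loop with a fixed communication step (names are illustrative).
class Plant:
    """Toy altitude model: integrates the commanded climb rate."""
    def __init__(self):
        self.alt, self.rate = 0.0, 0.0
    def set(self, rate): self.rate = rate
    def do_step(self, dt): self.alt += self.rate * dt
    def get(self): return self.alt

class Controller:
    """Toy proportional controller toward a target altitude."""
    def __init__(self, target, k=0.5):
        self.target, self.k, self.cmd = target, k, 0.0
    def set(self, alt): self.cmd = self.k * (self.target - alt)
    def do_step(self, dt): pass  # stateless
    def get(self): return self.cmd

def master(plant, ctrl, dt=0.1, steps=100):
    for _ in range(steps):       # fixed communication points
        ctrl.set(plant.get())    # exchange outputs for inputs
        plant.set(ctrl.get())
        ctrl.do_step(dt)
        plant.do_step(dt)
    return plant.get()

print(round(master(Plant(), Controller(target=10.0)), 2))  # → 9.94
```

The value of the generic data-oriented interface is visible even in this sketch: the master never needs to know what is inside either component, which is what lets heterogeneous software and hardware models plug in.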

Citations: 0
REC: REtime Convolutional layers to fully exploit harvested energy for ReRAM-based CNN accelerators
IF 2 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-03-15 | DOI: 10.1145/3652593
Kunyu Zhou, Keni Qiu

As the Internet of Things (IoT) increasingly incorporates AI technology, deploying neural network algorithms at the edge to make IoT devices more intelligent than ever is a growing trend. Moreover, IoT devices based on energy-harvesting technology offer the advantages of a green, low-carbon economy, convenient maintenance, and a theoretically infinite lifetime. However, the harvested energy is often unstable, resulting in low performance, because a fixed load cannot sufficiently utilize the harvested energy. To address this problem, recent works on ReRAM-based convolutional neural network (CNN) accelerators under harvested energy have proposed hardware/software optimizations. However, those works have overlooked the mismatch between the power requirements of different CNN layers and the variation of the harvested power.

Motivated by the above observation, this paper proposes a novel strategy, called REC, that retimes the convolutional layers of CNN inferences to improve the performance and energy efficiency of energy-harvesting ReRAM-based accelerators. Specifically, at the offline stage, REC defines different power levels to fit the power requirements of different convolutional layers. At runtime, instead of sequentially executing the convolutional layers of an inference one by one, REC retimes the execution timeframe of different convolutional layers so as to match different CNN layers to the changing power input. Moreover, REC provides a parallel strategy to fully utilize very high power inputs. A case study further shows that REC effectively improves the real-time completion of periodic critical inferences, because REC gives critical inferences an opportunity to preempt the processing window when the power supply is high. Our experimental results show that the proposed REC scheme achieves an average performance improvement of 6.1× (up to 16.5×) compared to the traditional strategy without the REC idea. The case study results show that the REC scheme can significantly improve the success rate of real-time completion of periodic critical inferences.
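The retiming idea can be sketched as matching per-layer power requirements against a trace of harvested power per time slot. This toy is our simplification of the REC strategy: it keeps layer order but lets a layer wait for a slot strong enough to power it, rather than running layers back-to-back.

```python
# Illustrative retiming sketch (names and power units are arbitrary):
# each conv layer runs in the first remaining slot whose harvested
# power meets that layer's requirement.
def retime(layer_power, harvested):
    """Return the slot index chosen for each layer, preserving layer order."""
    schedule, slot = [], 0
    for need in layer_power:
        while slot < len(harvested) and harvested[slot] < need:
            slot += 1            # skip slots too weak for this layer
        if slot == len(harvested):
            return None          # energy trace exhausted
        schedule.append(slot)
        slot += 1
    return schedule

layers = [3, 1, 4]               # per-layer power requirement
power = [2, 3, 1, 1, 5, 2]       # harvested power per time slot
print(retime(layers, power))     # → [1, 2, 4]
```

The real REC scheme is richer (offline power levels, a parallel mode for very high inputs, preemption for critical inferences), but the core mismatch it removes is the one visible here: a fixed sequential schedule would stall whenever the next layer's requirement exceeds the current input.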

Citations: 0
Implementing Privacy Homomorphism with Random Encoding and Computation Controlled by a Remote Secure Server
IF 2 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-03-08 | DOI: 10.1145/3651617
Kevin Hutto, Vincent Mooney

Remote IoT devices face significant security risks due to their inherent physical vulnerability. An adversarial actor with sufficient capability can monitor the devices or exfiltrate data to access sensitive information. Remotely deployed devices such as sensors need enhanced resilience against memory leakage when performing privileged tasks. To increase the security and trust of these devices, we present a novel framework implementing a privacy homomorphism that creates sensor data directly in an encoded format. The sensor data are permuted at the time of creation in a manner that appears random to an observer. A separate secure server in communication with the device provides the information necessary for the device to perform processing on the encoded data, but does not allow decoding of the result. The device transmits the encoded results to the secure server, which retains the ability to interpret them. In this paper, we show how this framework works for an image sensor calculating differences between a stream of images, with initial results showing a throughput overhead as low as 266% compared to computing on standard unencoded numbers such as two's complement. We further show a 5,000x speedup over a recent homomorphic encryption ASIC.
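A heavily simplified sketch of the encoding idea, using a permutation of pixel positions as the random encoding. This construction is ours for illustration, not the paper's exact scheme: the server fixes a secret permutation, the sensor emits frames already permuted, the device differences encoded frames position-wise, and only the server can map the result back to image coordinates.

```python
# Toy permutation-based privacy homomorphism for frame differencing.
import random

def make_permutation(n, seed):
    rng = random.Random(seed)        # secret shared only with the server
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def encode(frame, perm):
    return [frame[p] for p in perm]  # sensor-side: permuted at creation

def diff(enc_a, enc_b):
    # Device-side processing on encoded data; positions stay opaque,
    # yet elementwise difference commutes with the permutation.
    return [a - b for a, b in zip(enc_a, enc_b)]

def decode(enc, perm):
    out = [0] * len(perm)            # server-side: invert the permutation
    for i, p in enumerate(perm):
        out[p] = enc[i]
    return out

perm = make_permutation(4, seed=42)
f1, f2 = [10, 20, 30, 40], [10, 25, 30, 39]
print(decode(diff(encode(f2, perm), encode(f1, perm)), perm))  # → [0, 5, 0, -1]
```

The homomorphic property used here is just that per-position subtraction commutes with any fixed permutation; the paper's scheme addresses the harder problem of keeping the encoding secure against an observer with access to the device.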

Cited by: 0
Toward Energy Efficient STT-MRAM-based Near Memory Computing Architecture for Embedded Systems
IF 2 CAS Zone 3 Computer Science Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-03-07 DOI: 10.1145/3650729
Yueting Li, Xueyan Wang, He Zhang, Biao Pan, Keni Qiu, Wang Kang, Jun Wang, Weisheng Zhao

Convolutional Neural Networks (CNNs) have significantly impacted embedded system applications across various domains. However, this exacerbates the real-time processing and hardware resource-constrained challenges of embedded systems. To tackle these issues, we propose a spin-transfer torque magnetic random-access memory (STT-MRAM)-based near memory computing (NMC) design for embedded systems. We optimize this design from three aspects: (1) a fast-pipelined STT-MRAM readout scheme provides higher memory bandwidth for the NMC design, enhancing real-time processing capability with a non-trivial area overhead; (2) a direct index compression format, in conjunction with a digital sparse matrix-vector multiplication (SpMV) accelerator, supports the various matrices of practical applications, alleviating computing resource requirements; (3) custom NMC instructions and a stream converter for NMC systems dynamically adjust available hardware resources for better utilization. Experimental results demonstrate that the memory bandwidth of STT-MRAM reaches 26.7 GB/s. The digital SpMV accelerator improves energy consumption and latency by up to 64x and 1120x, respectively, across sparsity matrices spanning from 10% to 99.8%. Single-precision and double-precision element transmission increased by up to 8x and 9.6x, respectively. Furthermore, our design achieves up to 15.9x the throughput of state-of-the-art designs.
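The computation the SpMV accelerator performs can be sketched in software. The paper's direct index compression format is not detailed here, so this sketch uses the common CSR layout as a stand-in; the arithmetic is the same either way: touch only the stored nonzeros.

```python
def csr_from_dense(mat):
    # Build CSR arrays (values, column indices, row pointers) from a dense
    # matrix given as a list of rows. CSR is a stand-in for the paper's
    # direct index compression format.
    values, cols, row_ptr = [], [], [0]
    for row in mat:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                cols.append(j)
        row_ptr.append(len(values))
    return values, cols, row_ptr

def spmv(values, cols, row_ptr, x):
    # y = A @ x, visiting only the nonzeros stored for each row.
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[cols[k]]
        y.append(acc)
    return y
```

The higher the sparsity, the fewer multiply-accumulates the inner loop executes, which is the source of the energy and latency gains the abstract reports.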

Cited by: 0
Energy Management for Fault-Tolerant (m,k)-Constrained Real-Time Systems that Use Standby-Sparing
IF 2 CAS Zone 3 Computer Science Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-02-21 DOI: 10.1145/3648365
Linwei Niu, Danda B. Rawat, Dakai Zhu, Jonathan Musselwhite, Zonghua Gu, Qingxu Deng

Fault tolerance, energy management, and quality of service (QoS) are essential aspects of the design of real-time embedded systems. In this work, we focus on exploring methods that can simultaneously address these three critical issues under standby-sparing. The standby-sparing mechanism adopts a dual-processor architecture in which each processor dynamically plays the role of backup for the other. In this way it can provide fault tolerance subject to both permanent and transient faults. Due to its duplicate executions of the real-time jobs/tasks, the energy consumption of a standby-sparing system can be quite high. To reduce energy under standby-sparing, we propose three novel scheduling schemes: the first is for (1, 1)-constrained tasks, and the second and third (which can be combined into an integrated approach to maximize the overall energy reduction) are for general (m, k)-constrained tasks, which require that among any k consecutive jobs of a task no more than (k − m) of them may miss their deadlines. Through extensive evaluations and performance analysis, our results demonstrate that compared with the existing research, the proposed techniques can reduce energy by up to 11% for (1, 1)-constrained tasks and 25% for general (m, k)-constrained tasks while assuring (m, k)-constraints and fault tolerance as well as providing better user-perceived QoS levels under standby-sparing.
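The (m, k)-constraint in the abstract can be checked mechanically over a history of job outcomes. This is a minimal sliding-window sketch (function and parameter names are illustrative, not from the paper): in every window of k consecutive jobs, at least m must meet their deadlines, i.e., at most (k − m) may miss.

```python
from collections import deque

def satisfies_mk(history, m, k):
    # history: sequence of booleans, True = job met its deadline.
    # Slide a window of the last k outcomes; the constraint fails as soon
    # as a full window contains fewer than m deadline hits.
    window = deque(maxlen=k)
    for met in history:
        window.append(met)
        if len(window) == k and sum(window) < m:
            return False
    return True
```

For example, with (m, k) = (2, 3), the history met-miss-met-met-miss satisfies the constraint (every 3-job window has at least 2 hits), while met-miss-miss violates it.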

Cited by: 0
Elements of Timed Pattern Matching
IF 2 CAS Zone 3 Computer Science Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-02-10 DOI: 10.1145/3645114
Dogan Ulus, Thomas Ferrère, Eugene Asarin, Dejan Nickovic, Oded Maler

The rise of machine learning and cloud technologies has led to a remarkable influx of data within modern cyber-physical systems. However, extracting meaningful information from this data has become a significant challenge due to its volume and complexity. Timed pattern matching has emerged as a powerful specification-based runtime verification and temporal data analysis technique to address this challenge.

In this paper, we provide a comprehensive tutorial on timed pattern matching that ranges from the underlying algebra and pattern specification languages to performance analyses and practical case studies. Analogous to textual pattern matching, timed pattern matching is the task of finding all time periods within temporal behaviors of cyber-physical systems that match a predefined pattern. Originally, we introduced and solved several variants of the problem under the name of match sets, which has evolved into the concept of timed relations over the past decade. Here we first formalize and present the algebra of timed relations as a standalone mathematical tool to solve the pattern matching problem of timed pattern specifications. In particular, we show how to use the algebra of timed relations to solve the pattern matching problem for timed regular expressions and metric compass logic in a unified manner. We experimentally demonstrate that our timed pattern matching approach performs and scales well in practice. We further provide in-depth insights into the similarities and fundamental differences between monitoring and matching problems as well as between regular expressions and temporal logic formulas. Finally, we illustrate the practical application of timed pattern matching through two case studies, which show how to extract structured information from temporal datasets obtained via simulations or real-world observations. These results and examples show that timed pattern matching is a rigorous and efficient technique for developing and analyzing cyber-physical systems.
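A fragment of the algebra of timed relations can be sketched concretely. Assuming a piecewise-constant boolean signal, the match set of an atomic pattern holding over an interval [b, e] is the triangle {(t, t′) : b ≤ t ≤ t′ ≤ e}, and concatenation composes two such sets through an intermediate switch point. The zone representation below is a deliberate simplification of the paper's machinery, with illustrative names.

```python
def compose(p_intervals, q_intervals):
    # Match set of the concatenation p . q, given the intervals where p and
    # q hold. Each returned zone (t_lo, t_hi, s_lo, s_hi) denotes
    # { (t, t'') : t_lo <= t <= t_hi, s_lo <= t'' <= s_hi, t <= t'' },
    # i.e. there exists a switch point t' with (t, t') matching p and
    # (t', t'') matching q.
    zones = []
    for (b1, e1) in p_intervals:
        for (b2, e2) in q_intervals:
            lo, hi = max(b1, b2), min(e1, e2)  # candidate switch points t'
            if lo <= hi:
                zones.append((b1, hi, lo, e2))
    return zones

def member(zones, t, s):
    # Does the segment (t, s) belong to the match set?
    return any(t_lo <= t <= t_hi and s_lo <= s <= s_hi and t <= s
               for (t_lo, t_hi, s_lo, s_hi) in zones)
```

For instance, if p holds on [0, 5] and q on [3, 10], the segment (1, 7) matches p . q (switch anywhere in [3, 5]), while (6, 8) does not, since p no longer holds at time 6.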

Cited by: 0