
Latest publications: 2020 28th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS)

Adversarial Attacks in a Deep Reinforcement Learning based Cluster Scheduler
Shaojun Zhang, Chen Wang, Albert Y. Zomaya
A scheduler is essential for resource management in a shared computer cluster; in particular, scheduling algorithms play an important role in meeting the service-level objectives of user applications in the large-scale clusters that underlie cloud computing. Traditional cluster schedulers are often based on empirical observations of the patterns of jobs running on them. It is unclear how effective they are at capturing the patterns of the wide variety of jobs in clouds. Recent advances in Deep Reinforcement Learning (DRL) promise a new optimization framework for a scheduler to address this problem systematically. A DRL-based scheduler can extract detailed patterns from job features and the dynamics of cloud resource utilization to make better scheduling decisions. However, the deep neural network models used by the scheduler might be vulnerable to adversarial attacks, and there is limited research investigating this vulnerability in DRL-based schedulers. In this paper, we present a white-box attack method showing that malicious users can exploit the scheduling vulnerability to benefit certain jobs. The proposed attack requires only minor perturbations to job features to significantly change the scheduling priority of these jobs. We implement both greedy and critical-path-based algorithms to mount the attacks on a state-of-the-art DRL-based scheduler called Decima. Our extensive experiments on TPC-H workloads show attack success rates of 62% and 66% for the two algorithms, with successful attacks achieving completion time reductions of 18.6% and 17.5%, respectively.
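The greedy variant of such an attack can be sketched as follows. Everything here is a toy illustration: `score` is a hypothetical surrogate for the scheduler's learned priority function (Decima's real inputs and policy network are far richer), and the feature names and perturbation budget are invented.

```python
# Toy sketch of a greedy white-box perturbation attack: nudge job features
# within a small relative budget so a surrogate priority score improves.
# `score` is a hypothetical stand-in for the DRL policy, not Decima's model.

def score(features):
    # Assume the scheduler favors short, low-demand jobs.
    return -(features["remaining_work"] + 2.0 * features["executors_requested"])

def greedy_perturb(features, budget=0.10, step=0.01):
    """Greedily apply +/- `step` relative nudges to one feature at a time,
    keeping the total spent perturbation within `budget`."""
    perturbed = dict(features)
    spent = 0.0
    improved = True
    while improved and spent + step <= budget:
        improved = False
        base = score(perturbed)
        for key in perturbed:
            for direction in (1, -1):
                trial = dict(perturbed)
                trial[key] = max(0.0, trial[key] * (1 + direction * step))
                if score(trial) > base:
                    perturbed = trial
                    spent += step
                    improved = True
                    break
            if improved:
                break
    return perturbed

job = {"remaining_work": 10.0, "executors_requested": 4.0}
original = score(job)
attacked = score(greedy_perturb(job))
```

With these toy numbers the loop shaves the job's apparent remaining work, raising its priority; against a real DRL scheduler the same loop would query the policy network instead of `score`.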
Cited by: 1
Model-Aided Learning for URLLC Transmission in Unlicensed Spectrum
A. Hindi, S. Elayoubi, T. Chahed
In this paper, we focus on the transport of critical services in unlicensed spectrum, where stringent constraints on latency and reliability must be met, in the context of Ultra-Reliable Low Latency Communication (URLLC). Since contention-based medium access performs poorly under high traffic load, we propose a new transmission scheme in which the transmitter can increase its transmission power as the delay of a packet approaches the delay constraint, thereby increasing its chance of being decoded even if it collides with other, lower-power packets. We are, however, interested in minimizing the use of high-power transmissions, mainly to conserve energy on battery-powered devices and to limit the range of interference. We therefore define a transmission policy that uses a delay threshold after which high-power transmission starts, and propose a new online-learning approach based on Multi-Armed Bandits (MAB) to identify the policy that achieves minimum energy consumption while guaranteeing reliability. However, we observe that the MAB converges slowly to the optimal policy because loss events are rare in the load regime of interest. We then propose a model-aided learning approach in which a simple analytical model helps estimate the long-term reliability resulting from an action, and thus its reward. Our results show a significant enhancement of the convergence towards the optimal policy.
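The threshold-selection problem can be illustrated with a minimal epsilon-greedy bandit over a handful of candidate thresholds. The environment below (loss probabilities, energy costs, arm values) is entirely invented for illustration and is not the paper's URLLC model.

```python
import random

random.seed(0)

ARMS = [2, 4, 6, 8]   # candidate delay thresholds after which high power starts

def play(threshold):
    """Toy environment: a later threshold saves energy (higher reward)
    but slightly raises the packet-loss probability."""
    loss_prob = 0.0002 * threshold
    if random.random() < loss_prob:
        return 0.0                      # packet lost: no reward
    return 1.0 - 1.0 / threshold        # delivered: reward grows with savings

counts = {a: 0 for a in ARMS}
values = {a: 0.0 for a in ARMS}         # running mean reward per arm

for _ in range(5000):
    if random.random() < 0.1:                       # explore
        arm = random.choice(ARMS)
    else:                                           # exploit
        arm = max(ARMS, key=lambda a: values[a])
    reward = play(arm)
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

best = max(ARMS, key=lambda a: values[a])
```

The paper's model-aided twist replaces the raw loss observations, which are rare, with a reliability estimate from an analytical model, so the running means converge with far fewer plays.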
Cited by: 0
A NUMA-aware NVM File System Design for Manycore Server Applications
June Kim, Youngjae Kim, Safdar Jamil, Sungyong Park
NOVA, a state-of-the-art NVM-based file system, is known to have scalability bottlenecks when multiple I/O threads read and write data simultaneously. Recent studies have identified the cause as the coarse-grained lock NOVA adopts to provide consistency, and have proposed fine-grained range-based locks to improve its scalability. However, these variants of NOVA only scale on Uniform Memory Access (UMA) architectures and do not scale on Non-Uniform Memory Access (NUMA) architectures. This is because NOVA has no NUMA-aware memory allocation policy and still uses non-scalable file data structures. In this paper, we propose a NUMA-aware NOVA file system that virtualizes the NVM devices located across NUMA nodes so that they can be used as a single address space. The proposed file system adopts a local-first placement policy in which file data and metadata are preferentially placed on the local NVM device to reduce remote accesses. In addition, its lock-free per-core data structures allow data to be updated concurrently while mitigating remote memory accesses. Extensive evaluations show that our NUMA-aware NOVA scales with increasing core counts for parallel writes and outperforms vanilla NOVA by 2.56x to 19.18x.
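The local-first placement idea can be sketched in a few lines. Node IDs and capacities below are illustrative, and a real allocator works on NVM pages and free lists rather than abstract byte counts.

```python
# Sketch of a local-first placement policy: allocate from the NVM device on
# the local NUMA node while it has free space, and spill to a remote node
# otherwise. Capacities here are illustrative assumptions.

class NumaNvmAllocator:
    def __init__(self, free_per_node):
        # free_per_node: {node_id: free bytes on that node's NVM device}
        self.free = dict(free_per_node)

    def allocate(self, size, local_node):
        """Return the node chosen for this allocation (local-first)."""
        if self.free.get(local_node, 0) >= size:
            self.free[local_node] -= size
            return local_node
        # Fall back to the remote node with the most free space.
        node = max(self.free, key=self.free.get)
        if self.free[node] < size:
            raise MemoryError("no NVM device has enough free space")
        self.free[node] -= size
        return node

alloc = NumaNvmAllocator({0: 100, 1: 100})
first = alloc.allocate(80, local_node=0)    # fits locally on node 0
second = alloc.allocate(50, local_node=0)   # local node full: spills to node 1
```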
Cited by: 3
Concept Drift and Avoiding its Negative Effects in Predictive Modeling of Failures of Electricity Production Units in Power Plants
M. Molęda, A. Momot, Dariusz Mrozek
Ensuring the required accuracy of predictive models operating on time series is very important for industrial diagnostics systems. This is especially visible when many models cover hundreds of devices and thousands of measurements, operating under varying conditions in changing environments. In this work, we analyze the concept drift phenomenon in the context of actual measurements and predictions of a diagnostic system for boiler feed pumps working in coal-fired power plants. In the practical part, we adapt algorithms and techniques that operate on time series to obtain better results and reduce the negative effects of concept drift. The results of our experiments show that applying drift-handling methods improves the effectiveness of the fault prediction process.
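A minimal form of drift detection on a stream of prediction errors compares a frozen reference window against a sliding recent window; the window length and threshold below are illustrative assumptions, not the paper's configuration.

```python
from collections import deque

# Minimal window-based drift detection on a model's prediction errors:
# flag drift when the recent mean error exceeds the reference mean by more
# than a threshold.

def detect_drift(errors, window=50, threshold=0.15):
    """Return the index at which drift is flagged, or None."""
    reference = deque(maxlen=window)   # frozen after the first `window` errors
    recent = deque(maxlen=window)      # sliding window of latest errors
    for i, e in enumerate(errors):
        recent.append(e)
        if len(reference) < window:
            reference.append(e)
            continue
        ref_mean = sum(reference) / len(reference)
        rec_mean = sum(recent) / len(recent)
        if rec_mean - ref_mean > threshold:
            return i
    return None

# Stable low error, then the error distribution shifts upward.
stream = [0.05] * 100 + [0.4] * 100
drift_at = detect_drift(stream)
```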
Cited by: 0
Voilà: Tail-Latency-Aware Fog Application Replicas Autoscaler
Alice Fahs, G. Pierre, E. Elmroth
Latency-sensitive fog computing applications may use replication both to scale their capacity and to place application instances as close as possible to their end users. In such geo-distributed environments, a good replica placement should keep the tail network latency between end-user devices and their closest replica within acceptable bounds while avoiding overloaded replicas. When facing non-stationary workloads, it is essential to dynamically adjust the number and locations of a fog application's replicas. We propose Voilà, a tail-latency-aware autoscaler integrated in the Kubernetes orchestration system. Voilà maintains a fine-grained view of the volumes of traffic generated from different user locations, and uses simple yet highly effective procedures to maintain suitable application resources in terms of size and location.
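The core feasibility check behind such an autoscaler can be sketched as follows. The percentile, latency bound, and capacity figures are illustrative assumptions, not Voilà's actual parameters.

```python
# Sketch of the check a tail-latency-aware autoscaler performs on a candidate
# placement: the chosen tail percentile of user-to-replica latency must stay
# within bound, and no replica may be overloaded.

def percentile(values, p):
    """Nearest-rank style percentile over a small sample."""
    s = sorted(values)
    idx = min(len(s) - 1, int(round(p / 100.0 * (len(s) - 1))))
    return s[idx]

def placement_ok(user_latencies_ms, replica_load, capacity=100,
                 tail_pct=90, bound_ms=50):
    """user_latencies_ms: latency from each user region to its closest replica.
    replica_load: requests/s currently routed to each replica."""
    tail = percentile(user_latencies_ms, tail_pct)
    overloaded = any(load > capacity for load in replica_load.values())
    return tail <= bound_ms and not overloaded

ok = placement_ok([5, 8, 12, 20, 45], {"r1": 60, "r2": 80})    # within bounds
bad = placement_ok([5, 8, 12, 20, 80], {"r1": 60, "r2": 120})  # tail + overload
```

When the check fails, an autoscaler in this style would add, move, or remove replicas and re-evaluate.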
Cited by: 13
μCache: a mutable cache for SMR translation layer μCache:用于SMR转换层的可变缓存
Mohammad Hossein Hajkazemi, Mania Abdi, Peter Desnoyers
Shingled Magnetic Recording (SMR) may be combined with conventional (re-writable) recording on the same drive; in host-managed drives shipping today, this capability is used to provide a small number of re-writable zones, typically totaling a few tens of GB. Although these re-writable zones are widely used by SMR-aware applications, the literature to date has ignored them and focused on fully-shingled devices. We describe μCache, an SMR translation layer (STL) that uses re-writable (mutable) zones to take advantage of both spatial and temporal locality in the workload to reduce the garbage collection overhead resulting from out-of-place writes. In μCache, the volume LBA space is divided into fixed-size buckets and, on write access, the corresponding bucket is copied (promoted) to the re-writable zones, allowing subsequent writes to the same bucket to be served in place and resulting in fewer garbage collection cycles. We evaluate μCache in simulation against real-world traces and show that, with appropriate parameters, it is able to hold the entire write working set of most workloads in re-writable storage, virtually eliminating garbage collection overhead. We also emulate μCache by replaying its translated traces against an actual drive and show that 1) it outperforms its examined counterpart, an E-region-based translation approach, by 2x on average and up to 5.1x, and 2) it incurs additional latency only for a small fraction of write operations (up to 10%) compared with conventional non-shingled disks.
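The bucket-promotion mechanism can be sketched as a small LRU of promoted buckets. The bucket size and cache capacity below are illustrative, and real demotion involves copying the bucket's data back to the shingled zones rather than simply dropping a key.

```python
from collections import OrderedDict

BUCKET_SIZE = 256          # LBAs per bucket (illustrative)
CACHE_BUCKETS = 2          # buckets that fit in the re-writable zones

class MutableCache:
    def __init__(self):
        self.cache = OrderedDict()   # bucket id -> promoted (LRU order)
        self.demotions = 0           # evictions back to shingled zones

    def write(self, lba):
        bucket = lba // BUCKET_SIZE
        if bucket in self.cache:
            self.cache.move_to_end(bucket)   # served in place, no cleaning
            return "in-place"
        if len(self.cache) >= CACHE_BUCKETS:
            self.cache.popitem(last=False)   # demote the coldest bucket
            self.demotions += 1
        self.cache[bucket] = True            # promote bucket to mutable zone
        return "promoted"

c = MutableCache()
results = [c.write(lba) for lba in (10, 20, 300, 30, 600)]
```

With this trace, writes 10, 20, and 30 hit bucket 0, so only the first pays a promotion; write 600 forces one demotion.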
Cited by: 2
Evaluating the Performance of a State-of-the-Art Group-oriented Encryption Scheme for Dynamic Groups in an IoT Scenario
Thomas Prantl, Peter Ten, Lukas Iffländer, A. Dmitrienko, Samuel Kounev, Christian Krupitzer
New emerging technologies, such as autonomous driving, intelligent buildings, and smart cities, promise to revolutionize user experience and offer new services. To make this happen, the world has to undergo large-scale deployment of billions of things: cost-efficient intelligent sensors that will be interconnected into extensive networks and will collect and supply data to intelligent algorithms. To date, however, securing such an infrastructure is challenging for many reasons, such as the resource constraints of things, large-scale deployment, many-to-many communication patterns, and dynamically changing communication groups. All these factors rule out most state-of-the-art encryption and key-management techniques. Group encryption algorithms are well-suited to the many-to-many communication patterns typical of IoT networks, and many of them can deal with dynamic groups. There are, however, very few constructions that could potentially satisfy the computational and storage constraints of IoT devices while providing sufficient scalability for large networks. The promising candidates, such as the construction by Nishat et al. [1], have not been evaluated on IoT platforms under the constraints typical of IoT networks. In this paper, we aim to fill this gap and present an evaluation of the state-of-the-art group-oriented encryption scheme by Nishat et al. to determine its applicability to IoT systems. In detail, we provide a measurement workflow and a revised version of the approach, and describe a reproducible hardware testbed. Using this evaluation environment, we analyze the performance of the encryption scheme in a typical IoT scenario from a group member's perspective. The results show that all computation times can be assumed to be constant and are always below 2 seconds. The memory requirement for permanent parameters can also be considered constant and is below 8.5 kbit in each case. However, the information that has to be stored temporarily for group updates turns out to be the bottleneck of the scheme, since its memory requirement increases linearly with the group size.
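The kind of measurement workflow used in such an evaluation boils down to timing repeated runs of each operation and checking that the spread stays small; a minimal sketch, in which `dummy_encrypt` is a placeholder workload and not the evaluated scheme:

```python
import time
import statistics

def dummy_encrypt(payload: bytes) -> bytes:
    """Placeholder workload standing in for a cryptographic operation."""
    return bytes(b ^ 0x5A for b in payload)

def measure(op, arg, runs=50):
    """Time `runs` executions of op(arg); return (mean, stdev) in seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        op(arg)
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

mean_s, stdev_s = measure(dummy_encrypt, b"x" * 1024)
```

Repeating this for each group operation and group size, and comparing means across sizes, is how one would distinguish constant-time behavior from the linearly growing costs reported for group updates.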
Cited by: 8
Non-Asymptotic Performance Analysis of Size-Based Routing Policies
E. Bachmat, J. Doncel
We investigate the performance of two size-based routing policies: Size Interval Task Assignment (SITA) and Task Assignment based on Guessing Size (TAGS). We consider a system with two servers and Bounded Pareto distributed job sizes with tail parameter 1, where the difference between the sizes of the largest and the smallest job is finite. We show that the ratio of the mean waiting time of TAGS to the mean waiting time of SITA is unbounded when the largest job size is large and the arrival rate times the largest job size is less than one. We provide numerical experiments showing that our theoretical findings extend to Bounded Pareto distributed job sizes with tail parameters different from 1.
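The difference between the two policies is easiest to see in code: SITA routes on the known job size, while TAGS discovers sizes by running every job on the first server up to a cutoff and restarting it from scratch on the second server if it has not finished. The cutoff value below is illustrative.

```python
CUTOFF = 10.0    # illustrative size cutoff between the two servers

def sita_route(job_size):
    """SITA knows job sizes: short jobs go to server 1, long jobs to server 2."""
    return 1 if job_size <= CUTOFF else 2

def tags_work(job_size):
    """TAGS guesses sizes: total work performed across servers for one job.
    A job longer than the cutoff wastes CUTOFF units on server 1 before
    being restarted from scratch on server 2."""
    if job_size <= CUTOFF:
        return job_size
    return CUTOFF + job_size

short_server = sita_route(3.0)
long_server = sita_route(25.0)
wasted = tags_work(25.0) - 25.0   # extra work TAGS pays for one long job
```

This restart penalty on long jobs is the source of the gap between the two policies that the paper quantifies.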
Citations: 1
Reliable Reverse Engineering of Intel DRAM Addressing Using Performance Counters
Christian Helm, Soramichi Akiyama, K. Taura
The memory controller of a processor translates physical memory addresses to hardware components such as memory channels, ranks, and banks. This DRAM address mapping is of interest to many researchers in the fields of IT security, hardware architecture, system software, and performance tuning. However, Intel processors use a complex and undocumented DRAM addressing scheme. The addressing can differ from system to system because it depends on many aspects such as the processor model, the DIMM population on the motherboard, and BIOS settings. Thus, an analysis of every individual system is necessary. In this paper, we introduce an automatic and reliable method for reverse engineering the DRAM addressing of Intel server-class processors. In contrast to existing approaches, it is reliable: measurement errors are unlikely to occur, and they can be detected if they do. Our method mainly relies on CPU hardware performance counters to precisely locate the accessed DRAM component. It eliminates the problem of wrong attribution that is common in timing-based approaches. We validated our method by reverse engineering the DRAM addressing of a diverse set of Intel processors. This set includes Broadwell, Haswell, and Skylake micro-architectures, with various core counts, DIMM arrangements, and BIOS settings. We show the correctness of the determined addressing functions using micro-benchmarks that access specific DRAM components.
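Intel DRAM address mappings are commonly modeled as XOR functions over physical-address bits, and the sketch below shows how a candidate mapping of that form is evaluated for a given address. The bit positions in `MAPPING` are made up for illustration; recovering the real functions for a specific machine is exactly what the paper's performance-counter method does.

```python
def xor_of_bits(addr, bits):
    """Evaluate one addressing function: XOR of the selected physical-address bits."""
    v = 0
    for b in bits:
        v ^= (addr >> b) & 1
    return v

# Hypothetical mapping: each DRAM component is indexed by one or more
# XOR functions over physical-address bits (bit positions are illustrative only).
MAPPING = {
    "channel": [[8, 14, 20]],
    "rank":    [[13, 17]],
    "bank":    [[6, 13], [14, 17], [15, 18]],
}

def locate(addr, mapping=MAPPING):
    """Map a physical address to its component indices under `mapping`."""
    return {comp: sum(xor_of_bits(addr, f) << i for i, f in enumerate(funcs))
            for comp, funcs in mapping.items()}
```

The paper's key idea is to validate such candidate functions by accessing addresses that should all land on one component and checking the per-channel/per-bank DRAM performance counters, rather than inferring the component indirectly from access timing.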
DOI: 10.1109/MASCOTS50786.2020.9285962. Published 2020-11-17.
Citations: 7
Mobile Network Traffic Forecasting Using Artificial Neural Networks
Anil Kirmaz, D. Michalopoulos, Irina Balan, W. Gerstacker
To provide a high quality of service, mobile communication systems need to adapt to temporally and spatially changing network traffic caused by the dynamic behavior of mobile users. Since these changes are not purely random, one can extract the deterministic portion and patterns from the observed network traffic to predict its future status. Such predictions can be utilized for a series of proactive network management procedures, including coordinated beam management, beam activation/deactivation, and load balancing. To this end, in this paper, an intelligent predictor using artificial neural networks is proposed and compared with a baseline scheme that uses linear prediction. It is shown that the neural network scheme outperforms the baseline scheme for data traffic that is relatively balanced between highly random and deterministic mobility patterns. For highly random or highly deterministic mobility patterns, the performance of the two considered schemes is similar.
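As a rough illustration of the comparison (not the paper's models or data), the sketch below builds both kinds of predictor from a lag-window dataset: a least-squares linear AR(p) baseline and a tiny one-hidden-layer network trained by gradient descent. The hidden size, learning rate, and window length are arbitrary choices for the sketch.

```python
import numpy as np

def make_windows(series, p):
    """Turn a 1-D series into (lag-window, next-value) training pairs."""
    X = np.stack([series[i:i + p] for i in range(len(series) - p)])
    return X, series[p:]

def linear_forecaster(X, y):
    """Least-squares linear AR(p) predictor with a bias term (the baseline scheme)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return lambda x: float(np.hstack([x, 1.0]) @ w)

def mlp_forecaster(X, y, hidden=16, epochs=3000, lr=0.05, seed=0):
    """One-hidden-layer tanh network trained by full-batch gradient descent."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, hidden); b2 = 0.0
    n = len(y)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                 # forward pass
        err = h @ W2 + b2 - y                    # prediction error
        gW2 = h.T @ err / n; gb2 = err.mean()    # output-layer gradients
        gh = np.outer(err, W2) * (1.0 - h ** 2)  # backprop through tanh
        gW1 = X.T @ gh / n; gb1 = gh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda x: float(np.tanh(x @ W1 + b1) @ W2 + b2)
```

On traffic mixing periodic and bursty components, the network can pick up nonlinear structure that the linear baseline misses, which mirrors the regime in which the paper reports the neural scheme winning; on purely deterministic or purely random traffic the two predictors behave similarly.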
DOI: 10.1109/MASCOTS50786.2020.9285949. Published 2020-11-17.
Citations: 3
Journal
2020 28th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS)