
Latest Publications: 2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)

SNAP: A Communication Efficient Distributed Machine Learning Framework for Edge Computing
Pub Date : 2020-11-01 DOI: 10.1109/ICDCS47774.2020.00072
Yangming Zhao, Jingyuan Fan, Lu Su, Tongyu Song, Sheng Wang, C. Qiao
More and more applications learn from the data collected by the edge devices. Conventional learning methods, such as gathering all the raw data to train an ultimate model in a centralized way, or training a target model in a distributed manner under the parameter server framework, suffer a high communication cost. In this paper, we design Select Neighbors and Parameters (SNAP), a communication efficient distributed machine learning framework, to mitigate the communication cost. A distinct feature of SNAP is that the edge servers act as peers to each other. Specifically, in SNAP, every edge server hosts a copy of the global model, trains it with the local data, and periodically updates the local parameters based on the weighted sum of the parameters from its neighbors (i.e., peers) only (i.e., without pulling the parameters from all other edge servers). Different from most of the previous works on consensus optimization in which the weight matrix to update parameter values is predefined, we propose a scheme to optimize the weight matrix based on the network topology, and hence the convergence rate can be improved. Another key idea in SNAP is that only the parameters which have been changed significantly since the last iteration will be sent to the neighbors. Both theoretical analysis and simulations show that SNAP can achieve the same accuracy performance as the centralized training method. Compared to the state-of-the-art communication-aware distributed learning scheme TernGrad, SNAP incurs a significantly lower (99.6% lower) communication cost.
{"title":"SNAP: A Communication Efficient Distributed Machine Learning Framework for Edge Computing","authors":"Yangming Zhao, Jingyuan Fan, Lu Su, Tongyu Song, Sheng Wang, C. Qiao","doi":"10.1109/ICDCS47774.2020.00072","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00072","url":null,"abstract":"More and more applications learn from the data collected by the edge devices. Conventional learning methods, such as gathering all the raw data to train an ultimate model in a centralized way, or training a target model in a distributed manner under the parameter server framework, suffer a high communication cost. In this paper, we design Select Neighbors and Parameters (SNAP), a communication efficient distributed machine learning framework, to mitigate the communication cost. A distinct feature of SNAP is that the edge servers act as peers to each other. Specifically, in SNAP, every edge server hosts a copy of the global model, trains it with the local data, and periodically updates the local parameters based on the weighted sum of the parameters from its neighbors (i.e., peers) only (i.e., without pulling the parameters from all other edge servers). Different from most of the previous works on consensus optimization in which the weight matrix to update parameter values is predefined, we propose a scheme to optimize the weight matrix based on the network topology, and hence the convergence rate can be improved. Another key idea in SNAP is that only the parameters which have been changed significantly since the last iteration will be sent to the neighbors. Both theoretical analysis and simulations show that SNAP can achieve the same accuracy performance as the centralized training method. Compared to the state-of-the-art communication-aware distributed learning scheme TernGrad, SNAP incurs a significantly lower (99.6% lower) communication cost.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125414784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 4
Real-Time Video Streaming using CeforeSim: Simulator to the Real World
Pub Date : 2020-11-01 DOI: 10.1109/ICDCS47774.2020.00122
Yusaku Hayamizu, K. Matsuzono, H. Asaeda
Applications of Information-Centric Networking (ICN) technology to the future Internet of Things (IoT) and distributed edge/fog computing are widely discussed in various research communities. In this paper, we demonstrate a real-time video streaming scenario using CeforeSim, an NS-3-based ICN simulator. CeforeSim is based on Cefore, an open-source implementation of ICN that is compliant with the CCNx packet format standardized by the IRTF ICN Research Group (ICNRG). The virtual interfaces provisioned in CeforeSim enable seamless interaction between simulated nodes and physical nodes running Cefore applications, thereby affording performance evaluations in real environments for various scenarios, such as handover of mobile nodes, large-scale sensor networks, and distributed edge/fog computing.
{"title":"Real-Time Video Streaming using CeforeSim: Simulator to the Real World","authors":"Yusaku Hayamizu, K. Matsuzono, H. Asaeda","doi":"10.1109/ICDCS47774.2020.00122","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00122","url":null,"abstract":"Applications of Information-Centric Networking (ICN) technology to future internet of things (IoT) and distributed edge/fog computing are widely discussed in various research committees. In this paper, we demonstrate a real-time video streaming scenario using CeforeSim, an NS-3 based ICN simulator. CeforeSim is based on Cefore, an open-source implementation of ICN, which is compliant with the CCNx packet format standardized by the IRTF ICN Research Group (ICNRG). The virtual interfaces provisioned in CeforeSim expedite seamless interaction between the simulated nodes and physical nodes that run the Cefore applications, thereby affording performance evaluations in various scenarios, such as handover of mobile nodes, large-scale sensor networks, and distributed edge/fog computing with the real environments.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"182 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126775652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Protecting Real-time Video Chat against Fake Facial Videos Generated by Face Reenactment
Pub Date : 2020-11-01 DOI: 10.1109/ICDCS47774.2020.00082
Jiacheng Shang, Jie Wu
With the rapid spread of cameras on various devices, video chat has become one of the major means of communication, for example in online meetings. However, recent progress in face reenactment techniques enables attackers to generate fake facial videos and assume others' identities. To protect video chats against fake facial videos, we propose a new defense system that significantly raises the bar for face reenactment-assisted attacks. Compared with existing works, our system has three major strengths. First, it does not require extra hardware or intense computational resources. Second, it follows the normal video chat process and does not significantly degrade the user experience. Third, it does not need to collect training data from attackers or new users, which means it can be quickly launched on new devices. We developed a prototype and conducted comprehensive evaluations. Experimental results show that our system provides an average true acceptance rate of at least 92.5% for legitimate users and rejects the attacker with a mean accuracy of at least 94.4% for a single detection.
{"title":"Protecting Real-time Video Chat against Fake Facial Videos Generated by Face Reenactment","authors":"Jiacheng Shang, Jie Wu","doi":"10.1109/ICDCS47774.2020.00082","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00082","url":null,"abstract":"With the rapid popularity of cameras on various devices, video chat has become one of the major ways for communication, such as online meetings. However, the recent progress of face reenactment techniques enables attackers to generate fake facial videos and use others’ identities. To protect video chats against fake facial videos, we propose a new defense system to significantly raise the bar for face reenactment-assisted attacks. Compared with existing works, our system has three major strengths. First, our system does not require extra hardware or intense computational resources. Second, it follows the normal video chat process and does not significantly degrade the user experience. Third, our system does not need to collect training data from attackers and new users, which means it can be quickly launched on new devices. We developed a prototype and conducted comprehensive evaluations. Experimental results show that our system can provide an average true acceptance rate of at least 92.5% for legitimate users and reject the attacker with mean accuracy of at least 94.4% for a single detection.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124975215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 4
Mobile Phones Know Your Keystrokes through the Sounds from Finger’s Tapping on the Screen
Pub Date : 2020-11-01 DOI: 10.1109/ICDCS47774.2020.00102
Zhen Xiao, Tao Chen, Yang Liu, Zhenjiang Li
Mobile phones nowadays are equipped with at least dual microphones. We find that when a user is typing on a phone, the sounds generated by the vibration of the finger's taps on the screen surface can be captured by both microphones, and these recorded sounds alone are informative enough to infer the user's keystrokes. This ability can be leveraged to enable useful application designs, but it also raises a crucial privacy risk: private information typed by users on mobile phones has great potential to be leaked through such a recognition capability. In this paper, we address two key design issues and demonstrate (and, more importantly, alert people) that this risk is real and could affect many of us when we use our mobile phones. We implement our proposed techniques in a prototype system and conduct extensive experiments. The evaluation results indicate promising success rates over more than 4,000 keystrokes from different users on various types of mobile phones.
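The abstract does not disclose the inference pipeline. A standard building block for dual-microphone tap localization, assumed here purely for illustration, is estimating the time difference of arrival (TDoA) between the two recordings by cross-correlation; taps at different screen positions yield different TDoA values that a classifier could map to keys. The sketch below shows only this assumed ingredient, not the authors' method.

```python
import numpy as np

def tdoa_samples(mic_top, mic_bottom):
    """Estimate the time difference of arrival (in samples) of a tap sound
    between two microphone recordings via cross-correlation. A positive lag
    means the sound reached the top microphone later than the bottom one."""
    # Full cross-correlation; the argmax lag is the delay estimate.
    corr = np.correlate(mic_top, mic_bottom, mode="full")
    return int(np.argmax(corr)) - (len(mic_bottom) - 1)

# Illustrative check: a signal delayed by 5 samples yields a lag of 5.
sig = np.random.default_rng(0).standard_normal(1000)
delayed = np.concatenate([np.zeros(5), sig])[:1000]
print(tdoa_samples(delayed, sig))  # -> 5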
{"title":"Mobile Phones Know Your Keystrokes through the Sounds from Finger’s Tapping on the Screen","authors":"Zhen Xiao, Tao Chen, Yang Liu, Zhenjiang Li","doi":"10.1109/ICDCS47774.2020.00102","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00102","url":null,"abstract":"Mobile phones nowadays are equipped with at least dual microphones. We find when a user is typing on a phone, the sounds generated from the vibration caused by finger’s tapping on the screen surface can be captured by both microphones, and these recorded sounds alone are informative enough to infer the user’s keystrokes. This ability can be leveraged to enable useful application designs, while it also raises a crucial privacy risk that the private information typed by users on mobile phones has a great potential to be leaked through such a recognition ability. In this paper, we address two key design issues and demonstrate, more importantly alarm people, that this risk is possible, which could be related to many of us when we use our mobile phones. We implement our proposed techniques in a prototype system and conduct extensive experiments. The evaluation results indicate promising successful rates for more than 4000 keystrokes from different users on various types of mobile phones.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133914314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 9
DASH: A Universal Intersection Traffic Management System for Autonomous Vehicles
Pub Date : 2020-11-01 DOI: 10.1109/ICDCS47774.2020.00048
Jian Kang, D. Lin
Waiting in a long queue at a traffic light is a common and frustrating experience for the majority of daily commuters; it not only wastes valuable time but also pollutes our environment. With the advances in autonomous vehicles and their collaboration capabilities, previously jammed intersections have great potential to be turned into weaving traffic flows that no longer need to stop. Towards this vision, we propose a novel autonomous vehicle traffic coordination system called DASH. Specifically, DASH has a comprehensive model to represent intersections and vehicle status. It can constantly process a large volume of vehicle information of various kinds, resolve scheduling conflicts among all vehicles approaching the intersection, and generate the optimal travel plan for each individual vehicle in real time, guiding vehicles through intersections in a safe and highly efficient way. Unlike existing works on autonomous traffic control, which are limited to certain types of intersections and give little consideration to practicability, our proposed DASH algorithm is universal for any kind of intersection and yields near-maximum throughput while still ensuring riding comfort by preventing sudden stops and accelerations. We have conducted extensive experiments to evaluate the DASH system in scenarios with different types of intersections and different traffic flows. Our experimental results demonstrate its practicality, effectiveness, and efficiency.
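The abstract describes conflict resolution and per-vehicle travel plans without giving the algorithm. As a hedged illustration of the general idea only, the sketch below reserves non-overlapping time windows in each conflict zone of an intersection, in arrival order, so vehicles can adjust speed to hit their window without stopping. The Request fields and the first-come-first-served rule are assumptions, not DASH's actual scheduler.

```python
from dataclasses import dataclass

@dataclass
class Request:
    vehicle_id: str
    zone: str        # conflict zone of the intersection the trajectory crosses
    arrival: float   # earliest feasible arrival time at the zone (seconds)
    crossing: float  # time needed to clear the zone (seconds)

def schedule(requests):
    """Assign each vehicle a reserved time window in its conflict zone,
    delaying a vehicle only when windows in the same zone would overlap."""
    free_at = {}  # zone -> time the zone next becomes free
    plan = {}
    for req in sorted(requests, key=lambda r: r.arrival):
        start = max(req.arrival, free_at.get(req.zone, 0.0))
        free_at[req.zone] = start + req.crossing
        plan[req.vehicle_id] = (start, start + req.crossing)  # advised window
    return plan

# Two vehicles contending for the same zone: the second is shifted, not stopped.
print(schedule([Request("a", "NE", 3.0, 1.0), Request("b", "NE", 3.5, 1.0)]))
```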
{"title":"DASH: A Universal Intersection Traffic Management System for Autonomous Vehicles","authors":"Jian Kang, D. Lin","doi":"10.1109/ICDCS47774.2020.00048","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00048","url":null,"abstract":"Waiting in a long queue at a traffic light has been a common and frustrating experience of the majority of daily commuters, which not only wastes valuable time but also pollutes our environments. With the advances in autonomous vehicles and their collaboration capabilities, the previous jamming intersection has a great potential to be turned into weaving traffic flows that no longer need to stop. Towards this envision, we propose a novel autonomous vehicle traffic coordination system called DASH. Specifically, DASH has a comprehensive model to represent intersections and vehicle status. It can constantly process a large volume of vehicle information of various kinds, resolve scheduling conflicts of all vehicles coming towards the intersection, and generate the optimal travel plan for each individual vehicle in real time to guide vehicles passing intersections in a safe and highly efficient way. Unlike existing works on the autonomous traffic control which are limited to certain types of intersections and lack considerations of practicability, our proposed DASH algorithm is universal for any kind of intersections yields the near-maximum throughput while still ensuring riding comfort that prevents sudden stop and acceleration. We have conducted extensive experiments to evaluate the DASH system in the scenarios of different types of intersections and different traffic flows. Our experimental results demonstrate its practicality, effectiveness, and efficiency.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129839250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 3
TinyEVM: Off-Chain Smart Contracts on Low-Power IoT Devices
Pub Date : 2020-11-01 DOI: 10.1109/ICDCS47774.2020.00025
Christos Profentzas, M. Almgren, O. Landsiedel
With the rise of the Internet of Things (IoT), billions of devices ranging from simple sensors to smartphones will participate in billions of micropayments. However, current centralized solutions are unable to handle a massive number of micropayments from untrusted devices. Blockchains are promising technologies suitable for solving some of these challenges. In particular, permissionless blockchains such as Ethereum and Bitcoin have drawn the attention of the research community. However, increasingly large-scale deployments of blockchains reveal some of their scalability limitations. Prominent proposals to scale the payment system include off-chain protocols such as payment channels. However, the leading proposals assume powerful nodes with an always-on connection and frequent synchronization. In practice, these assumptions require significant communication, memory, and computation capacity, whereas IoT devices face substantial constraints in these areas. Existing approaches also do not capture the logic and processes of IoT, where applications need to process locally collected sensor data to make full use of IoT micropayments. In this paper, we present TinyEVM, a novel system to generate and execute off-chain smart contracts based on sensor data. TinyEVM's goal is to enable IoT devices to perform micropayments and, at the same time, address the device constraints. We investigate the trade-offs of executing smart contracts on low-power IoT devices using TinyEVM. We test our system with 7,000 publicly verified smart contracts, of which TinyEVM deploys 93% without any modification. Finally, we evaluate the execution of off-chain smart contracts in terms of run-time performance, energy, and memory requirements on IoT devices. Notably, we find that low-power devices can deploy a smart contract in 215 ms on average, and they can complete an off-chain payment in 584 ms on average.
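As a hedged illustration of the off-chain mechanics the abstract builds on (not TinyEVM's wire format), the sketch below encodes a payment-channel state with a monotonically increasing nonce, so that many micropayments update local state and only the latest mutually signed state ever needs to be settled on-chain. Real channels sign such digests with ECDSA keys; the field names and JSON encoding here are illustrative assumptions.

```python
import hashlib
import json

def channel_state(channel_id, nonce, balances):
    """Canonical encoding of an off-chain channel state. Each new state carries
    a higher nonce; only the latest mutually signed state is settled on-chain."""
    payload = json.dumps(
        {"channel": channel_id, "nonce": nonce, "balances": balances},
        sort_keys=True,
    ).encode()
    return payload, hashlib.sha256(payload).hexdigest()

# Two 5-unit micropayments from 'sensor' to 'gateway': each produces a new
# digest to sign, but nothing touches the chain until final settlement.
_, s1 = channel_state("ch-42", 1, {"sensor": 95, "gateway": 5})
_, s2 = channel_state("ch-42", 2, {"sensor": 90, "gateway": 10})
print(s1, s2)
```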
{"title":"TinyEVM: Off-Chain Smart Contracts on Low-Power IoT Devices","authors":"Christos Profentzas, M. Almgren, O. Landsiedel","doi":"10.1109/ICDCS47774.2020.00025","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00025","url":null,"abstract":"With the rise of the Internet of Things (IoT), billions of devices ranging from simple sensors to smart-phones will participate in billions of micropayments. However, current centralized solutions are unable to handle a massive number of micropayments from untrusted devices.Blockchains are promising technologies suitable for solving some of these challenges. Particularly, permissionless blockchains such as Ethereum and Bitcoin have drawn the attention of the research community. However, the increasingly large-scale deployments of blockchain reveal some of their scalability limitations. Prominent proposals to scale the payment system include off-chain protocols such as payment channels. However, the leading proposals assume powerful nodes with an always-on connection and frequent synchronization. These assumptions require in practice significant communication, memory, and computation capacity, whereas IoT devices face substantial constraints in these areas. Existing approaches also do not capture the logic and process of IoT, where applications need to process locally collected sensor data to allow for full use of IoT micro-payments.In this paper, we present TinyEVM, a novel system to generate and execute off-chain smart contracts based on sensor data. TinyEVM’s goal is to enable IoT devices to perform micro-payments and, at the same time, address the device constraints. We investigate the trade-offs of executing smart contracts on low-power IoT devices using TinyEVM. We test our system with 7,000 publicly verified smart contracts, where TinyEVM achieves to deploy 93% of them without any modification. Finally, we evaluate the execution of off-chain smart contracts in terms of run-time performance, energy, and memory requirements on IoT devices. Notably, we find that low-power devices can deploy a smart contract in 215 ms on average, and they can complete an off-chain payment in 584 ms on average.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133756643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 5
Exploiting Symbolic Execution to Accelerate Deterministic Databases
Pub Date : 2020-11-01 DOI: 10.1109/ICDCS47774.2020.00040
S. Issa, Miguel Viegas, Pedro Raminhas, Nuno Machado, M. Matos, P. Romano
Deterministic databases (DDs) are a promising approach for replicating data across different replicas. A fundamental component of DDs is a deterministic concurrency control algorithm that, given a set of transactions in a specific order, guarantees that their execution always results in the same serial order. State-of-the-art approaches rely either on single-threaded execution or on knowledge of the read- and write-sets of transactions to achieve this goal. The former yields poor performance on multi-core machines, while the latter requires either manual input from the user (a time-consuming and error-prone task) or a reconnaissance phase that increases both the latency and the abort rates of transactions. In this paper, we present Prognosticator, a novel deterministic database system. Rather than relying on manual transaction classification or an expert programmer, Prognosticator employs symbolic execution to build fine-grained transaction profiles (at the key level). These profiles are then used by Prognosticator's novel deterministic concurrency control algorithm to execute transactions with a high degree of parallelism. Our experimental evaluation, based on both the TPC-C and RUBiS benchmarks, shows that Prognosticator can achieve up to 5× higher throughput with respect to state-of-the-art solutions.
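Given the agreed transaction order and key-level profiles of the kind symbolic execution can provide, a deterministic scheduler can derive identical parallel batches on every replica without any runtime coordination. The sketch below shows one such rule, treating every profiled key as a write (a simplifying assumption); it illustrates deterministic batching in general, not Prognosticator's actual algorithm.

```python
from collections import defaultdict

def deterministic_schedule(txns):
    """Group transactions into parallel batches that every replica derives
    identically from the same input: a transaction lands in the earliest
    batch where none of its keys are still held by an earlier transaction.

    txns: list of (txn_id, key_set) in the agreed global order.
    """
    batches = []                       # batches[i] -> list of txn_ids
    key_free_from = defaultdict(int)   # key -> first batch index it is free in
    for txn_id, keys in txns:
        b = max((key_free_from[k] for k in keys), default=0)
        while len(batches) <= b:
            batches.append([])
        batches[b].append(txn_id)
        for k in keys:
            key_free_from[k] = b + 1   # key busy until this batch finishes
    return batches

# Same input order -> same batches everywhere; txns within a batch run in parallel.
print(deterministic_schedule([("t1", {"x"}), ("t2", {"y"}), ("t3", {"x", "y"})]))
# -> [['t1', 't2'], ['t3']]
```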
{"title":"Exploiting Symbolic Execution to Accelerate Deterministic Databases","authors":"S. Issa, Miguel Viegas, Pedro Raminhas, Nuno Machado, M. Matos, P. Romano","doi":"10.1109/ICDCS47774.2020.00040","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00040","url":null,"abstract":"Deterministic databases (DDs) are a promising approach for replicating data across different replicas. A fundamental component of DDs is a deterministic concurrency control algorithm that, given a set of transactions in a specific order, guarantees that their execution always results in the same serial order. State-of-the-art approaches either rely on single threaded execution or on the knowledge of read- and write-sets of transactions to achieve this goal. The former yields poor performance in multi-core machines while the latter requires either manual inputs from the user — a time-consuming and error prone task — or a reconnaissance phase that increases both the latency and abort rates of transactions.In this paper, we present Prognosticator, a novel deterministic database system. Rather than relying on manual transaction classification or an expert programmer, Prognosticator employs Symbolic Execution to build fine-grained transaction profiles (at the key-level). These profiles are then used by Prognosticator’s novel deterministic concurrency control algorithm to execute transactions with a high degree of parallelism.Our experimental evaluation, based on both TPC-C and RUBiS benchmarks, shows that Prognosticator can achieve up to 5× higher throughput with respect to state-of-the-art solutions.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130178799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 3
Soteria: Detecting Adversarial Examples in Control Flow Graph-based Malware Classifiers
Pub Date : 2020-11-01 DOI: 10.1109/ICDCS47774.2020.00089
Hisham Alasmary, Ahmed A. Abusnaina, Rhongho Jang, M. Abuhamad, Afsah Anwar, Daehun Nyang, David A. Mohaisen
Deep learning algorithms have been widely used in security applications, including malware detection and classification. Recent results have shown that those algorithms are vulnerable to adversarial examples (AEs), whereby a small perturbation of the input sample may result in misclassification. In this paper, we systematically tackle the problem of detecting adversarial examples in control flow graph (CFG) based malware classifiers using Soteria. Unique to Soteria, we use both density-based and level-based labels for CFG labeling to yield a consistent representation, a random-walk-based traversal approach for feature extraction, and an n-gram-based module for feature representation. End to end, Soteria's representation ensures a simple yet powerful randomization property of the classification features, making it difficult even for a powerful adversary to launch a successful attack. Soteria also employs a deep learning approach, consisting of an auto-encoder for detecting adversarial examples and a CNN architecture for detecting and classifying malware samples. We evaluate the performance of Soteria using a large dataset consisting of 16,814 IoT samples and demonstrate its superiority in comparison with state-of-the-art approaches. In particular, Soteria yields an accuracy rate of 97.79% for detecting AEs and 99.91% overall accuracy for classifying malware families.
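The feature pipeline the abstract names (a labeled CFG, random-walk traversal, and an n-gram representation) can be sketched as follows. The walk count, walk length, and n are illustrative parameters, and the node-labeling scheme is assumed to be given; this is a reconstruction of the pipeline's shape, not Soteria's implementation.

```python
import random
from collections import Counter

def random_walk_ngrams(cfg, labels, walks=100, walk_len=10, n=3, seed=0):
    """Count n-grams of node labels along random walks over a CFG.

    cfg    : dict node -> list of successor nodes
    labels : dict node -> label (e.g., a density- or level-based label)
    Returns a Counter of label n-grams, usable as a sparse feature vector.
    """
    rng = random.Random(seed)
    grams = Counter()
    nodes = list(cfg)
    for _ in range(walks):
        node = rng.choice(nodes)
        walk = [labels[node]]
        for _ in range(walk_len - 1):
            succs = cfg.get(node, [])
            if not succs:          # dead end: stop this walk early
                break
            node = rng.choice(succs)
            walk.append(labels[node])
        for i in range(len(walk) - n + 1):
            grams[tuple(walk[i:i + n])] += 1
    return grams
```

The randomness of the walks is what gives the representation the randomization property the abstract highlights: an adversary cannot predict exactly which paths contribute features.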
{"title":"Soteria: Detecting Adversarial Examples in Control Flow Graph-based Malware Classifiers","authors":"Hisham Alasmary, Ahmed A. Abusnaina, Rhongho Jang, M. Abuhamad, Afsah Anwar, Daehun Nyang, David A. Mohaisen","doi":"10.1109/ICDCS47774.2020.00089","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00089","url":null,"abstract":"Deep learning algorithms have been widely used for security applications, including malware detection and classification. Recent results have shown that those algorithms are vulnerable to adversarial examples, whereby a small perturbation in the input sample may result in misclassification. In this paper, we systematically tackle the problem of adversarial examples detection in the control flow graph (CFG) based classifiers for malware detection using Soteria. Unique to Soteria, we use both density-based and level-based labels for CFG labeling to yield a consistent representation, a random walk-based traversal approach for feature extraction, and n-gram based module for feature representation. End-to-end, Soteria’s representation ensures a simple yet powerful randomization property of the used classification features, making it difficult even for a powerful adversary to launch a successful attack. Soteria also employs a deep learning approach, consisting of an auto-encoder for detecting adversarial examples, and a CNN architecture for detecting and classifying malware samples. We evaluate the performance of Soteria, using a large dataset consisting of 16,814 IoT samples, and demonstrate its superiority in comparison with state-of-the-art approaches. In particular, Soteria yields an accuracy rate of 97.79% for detecting AEs, and 99.91% overall accuracy for classification malware families.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114627043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 28
Adaptive Precision Training for Resource Constrained Devices
Pub Date : 2020-11-01 DOI: 10.1109/ICDCS47774.2020.00185
Tian Huang, Tao Luo, Joey Tianyi Zhou
In-situ learning is a growing trend for Edge AI. Training a deep neural network (DNN) on edge devices is challenging because both energy and memory are constrained. Low-precision training helps to reduce the energy cost of a single training iteration, but that does not necessarily translate to energy savings for the whole training process, because low precision can slow down the convergence rate. One piece of evidence is that most works on low-precision training keep an fp32 copy of the model during training, which in turn imposes memory requirements on edge devices. In this work we propose Adaptive Precision Training (APT), which saves both total training energy cost and memory usage at the same time. We use a model of the same precision for both the forward and backward passes in order to reduce the memory used for training. By evaluating the progress of training, APT allocates layer-wise precision dynamically so that the model learns more quickly for longer. APT provides an application-specific hyper-parameter for users to trade off training energy cost, memory usage, and accuracy. Experiments show that APT achieves more than 50% savings in training energy and memory usage with limited accuracy loss. A further 20% savings in training energy and memory usage can be achieved in return for a 1% sacrifice in accuracy.
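The abstract does not specify how APT measures progress or reallocates precision. The toy policy below only illustrates the shape of such a controller: keep precision low while the loss still drops, and promote the layers with the largest gradient norms when progress stalls. The thresholds, bit widths, and quartile rule are invented for illustration and are not APT's actual mechanism.

```python
def choose_layer_precisions(grad_norms, loss_drop, bit_widths=(8, 16, 32)):
    """Toy layer-wise precision policy (sketch, not APT's rule).

    grad_norms: dict layer -> recent gradient norm
    loss_drop : relative loss improvement over the last evaluation window
    Returns dict layer -> assigned bit width.
    """
    low, mid, high = bit_widths
    if loss_drop > 0.01:  # still converging well: stay low precision, save energy
        return {layer: low for layer in grad_norms}
    # Progress stalled: give the top quartile of layers (by gradient norm)
    # full precision, and the rest an intermediate precision.
    ranked = sorted(grad_norms, key=grad_norms.get, reverse=True)
    top = set(ranked[: max(1, len(ranked) // 4)])
    return {layer: (high if layer in top else mid) for layer in grad_norms}

print(choose_layer_precisions({"conv1": 0.9, "conv2": 0.1, "fc": 0.5}, 0.001))
```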
{"title":"Adaptive Precision Training for Resource Constrained Devices","authors":"Tian Huang, Tao Luo, Joey Tianyi Zhou","doi":"10.1109/ICDCS47774.2020.00185","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00185","url":null,"abstract":"Learn in-situ is a growing trend for Edge AI. Training deep neural network (DNN) on edge devices is challenging because both energy and memory are constrained. Low precision training helps to reduce the energy cost of a single training iteration, but that does not necessarily translate to energy savings for the whole training process, because low precision could slows down the convergence rate. One evidence is that most works for low precision training keep an fp32 copy of the model during training, which in turn imposes memory requirements on edge devices. In this work we propose Adaptive Precision Training. It is able to save both total training energy cost and memory usage at the same time. We use model of the same precision for both forward and backward pass in order to reduce memory usage for training. Through evaluating the progress of training, APT allocates layer-wise precision dynamically so that the model learns quicker for longer time. APT provides an application specific hyper-parameter for users to play trade-off between training energy cost, memory usage and accuracy. Experiment shows that APT achieves more than 50% saving on training energy and memory usage with limited accuracy loss. 20% more savings of training energy and memory usage can be achieved in return for a 1% sacrifice in accuracy loss.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134366924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 3
Self-Stabilizing Set-Constrained Delivery Broadcast (extended abstract)
Pub Date : 2020-11-01 DOI: 10.1109/ICDCS47774.2020.00080
Oskar Lundström, M. Raynal, E. Schiller
Fault-tolerant distributed applications require communication abstractions with provable guarantees on message deliveries. For example, Set-Constrained Delivery Broadcast (SCD-broadcast) is a communication abstraction for broadcasting messages in a manner that, if a process delivers a set of messages that includes m and later delivers a set of messages that includes m′, then no process delivers first a set of messages that includes m′ and later a set of messages that includes m. Imbs et al. proposed this communication abstraction and its first implementation. They demonstrated that SCD-broadcast has the computational power of read/write registers and allows for easy building of distributed objects such as snapshot objects and consistent counters. Imbs et al. focused on fault-tolerant implementations for asynchronous message-passing systems that are prone to process crashes. This paper aims to design an even more robust SCD-broadcast communication abstraction, namely a self-stabilizing SCD-broadcast. In addition to process and communication failures, self-stabilizing algorithms can recover after the occurrence of arbitrary transient faults; these faults represent any violation of the assumptions according to which the system was designed to operate (as long as the algorithm code stays intact). This work proposes the first self-stabilizing SCD-broadcast algorithm for asynchronous message-passing systems that are prone to process crash failures. The proposed self-stabilizing SCD-broadcast algorithm has an $\mathcal{O}(1)$ stabilization time (in terms of asynchronous cycles). The communication costs of our algorithm are similar to those of the non-self-stabilizing state of the art. The main differences are that our proposal considers repeated gossiping of $\mathcal{O}(1)$-bit messages and deals with bounded space (which is a prerequisite for self-stabilization). We also advance the state of the art with two new self-stabilizing applications: an atomic construction of snapshot objects and sequentially consistent counters.
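The delivery-order property quoted above can be checked mechanically over per-process delivery histories. The sketch below implements only that property check (not the self-stabilizing algorithm itself): two messages violate SCD ordering exactly when some process delivers the first strictly before the second while another process does the opposite.

```python
from itertools import combinations

def violates_scd(histories):
    """Check the SCD-broadcast ordering property over delivery histories.

    histories: dict process -> list of delivered message sets, in delivery order
    Returns a pair of messages that two processes ordered inconsistently,
    or None if the histories satisfy the property.
    """
    def first_positions(history):
        pos = {}
        for i, mset in enumerate(history):
            for m in mset:
                pos.setdefault(m, i)  # index of the first set containing m
        return pos

    positions = [first_positions(h) for h in histories.values()]
    msgs = sorted({m for h in histories.values() for s in h for m in s})
    for m1, m2 in combinations(msgs, 2):
        m1_first = any(p[m1] < p[m2] for p in positions if m1 in p and m2 in p)
        m2_first = any(p[m2] < p[m1] for p in positions if m1 in p and m2 in p)
        if m1_first and m2_first:
            return (m1, m2)  # delivered in opposite orders at two processes
    return None

# Consistent: q delivers m1 and m2 together, which orders them with neither first.
print(violates_scd({"p": [{"m1"}, {"m2"}], "q": [{"m1", "m2"}]}))   # -> None
print(violates_scd({"p": [{"m1"}, {"m2"}], "q": [{"m2"}, {"m1"}]}))  # -> ('m1', 'm2')
```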
{"title":"Self-Stabilizing Set-Constrained Delivery Broadcast (extended abstract)","authors":"Oskar Lundström, M. Raynal, E. Schiller","doi":"10.1109/ICDCS47774.2020.00080","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00080","url":null,"abstract":"Fault-tolerant distributed applications require communication abstractions with provable guarantees on message deliveries. For example, Set-Constrained Delivery Broadcast (SCD-broadcast) is a communication abstraction for broadcasting messages in a manner that, if a process delivers a set of messages that includes m and later delivers a set of messages that includes m , no process delivers first a set of messages that includes m′ and later a set of messages that includes m.Imbs et al. proposed this communication abstraction and its first implementation. They have demonstrated that SCD-broadcast has the computational power of read/write registers and allows for an easy building of distributed objects such as snapshot objects and consistent counters. Imbs et al. focused on fault-tolerant implementations for asynchronous message-passing systems that are prone to process crashes. This paper aims to design an even more robust SCD-broadcast communication abstraction, namely a self-stabilizing SCD-broadcast. In addition to process and communication failures, self-stabilizing algorithms can recover after the occurrence of arbitrary transient faults; these faults represent any violation of the assumptions according to which the system was designed to operate (as long as the algorithm code stays intact).This work proposes the first self-stabilizing SCD-broadcast algorithm for asynchronous message-passing systems that are prone to process crash failures. The proposed self-stabilizing SCD-broadcast algorithm has an $mathcal{O}(1)$ stabilization time (in terms of asynchronous cycles). The communication costs of our algorithm are similar to the ones of the non-self-stabilizing state-of-the-art. The main differences are that our proposal considers repeated gossiping of $mathcal{O}(1)$ bits messages and deals with bounded space (which is a prerequisite for self-stabilization). We advance the state-of-the-art also by two new self-stabilizing applications: an atomic construction of snapshot objects and sequentially consistent counters.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132876884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 4