
Proceedings of the 1st International Workshop on Edge Systems, Analytics and Networking: Latest Publications

EdgeEye
Peng Liu, Bozhao Qi, Suman Banerjee
Deep learning with Deep Neural Networks (DNNs) achieves much higher accuracy on many computer vision tasks than classic machine learning algorithms. Because of their high demand for both computation and storage resources, DNNs are often deployed in the cloud. Unfortunately, executing deep learning inference in the cloud, especially for real-time video analysis, often incurs high bandwidth consumption, high latency, reliability issues, and privacy concerns. Moving the DNNs close to the data source with an edge computing paradigm is a good approach to address those problems; however, the lack of an open source framework with a high-level API complicates the deployment of deep learning-enabled services at the Internet edge. This paper presents EdgeEye, an edge-computing framework for real-time intelligent video analytics applications. EdgeEye provides a high-level, task-specific API so that developers can focus solely on application logic, and enables them to transform models trained with popular deep learning frameworks into deployable components with minimal effort. It leverages optimized inference engines from industry to achieve high inference performance and efficiency.
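As a concrete illustration of the kind of task-specific, high-level API the abstract describes, here is a minimal Python sketch: the developer declares what to detect and supplies a callback, while the framework hides model conversion and the inference engine. All names (EdgeEyeClient, ObjectDetectionTask, and so on) are hypothetical stand-ins, not EdgeEye's actual interface.

```python
# Hypothetical sketch of a task-specific, high-level video analytics API.
# Names and signatures are illustrative, not EdgeEye's real API.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Detection:
    label: str
    confidence: float


@dataclass
class ObjectDetectionTask:
    """A task-specific job: detect objects of interest in a video stream."""
    stream_url: str
    labels_of_interest: List[str]
    on_detection: Callable[[Detection], None]
    min_confidence: float = 0.5


class EdgeEyeClient:
    """Toy stand-in for an edge framework that runs tasks near the camera."""

    def __init__(self) -> None:
        self._tasks: List[ObjectDetectionTask] = []

    def submit(self, task: ObjectDetectionTask) -> None:
        self._tasks.append(task)

    def _fake_inference(self, frame: str) -> List[Detection]:
        # A real system would invoke an optimized inference engine here.
        return [Detection(label=frame, confidence=0.9)]

    def process_frame(self, frame: str) -> None:
        for task in self._tasks:
            for det in self._fake_inference(frame):
                if (det.label in task.labels_of_interest
                        and det.confidence >= task.min_confidence):
                    task.on_detection(det)


hits: List[Detection] = []
client = EdgeEyeClient()
client.submit(ObjectDetectionTask(
    stream_url="rtsp://camera.local/stream",   # illustrative URL
    labels_of_interest=["person"],
    on_detection=hits.append,
))
client.process_frame("person")
client.process_frame("car")
print(len(hits))  # only the "person" frame triggers the callback
```

The point of such an API is that application code touches only task objects and callbacks; which DNN runs, and where, is the framework's concern.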
DOI: 10.1145/3213344.3213345 (published 2018-06-10)
Citations: 87
Semi-Edge: From Edge Caching to Hierarchical Caching in Network Fog
Yining Hua, L. Guan, K. Kyriakopoulos
In recent content delivery mechanisms, popular content tends to be placed closer to the users for better delivery performance and lower network resource occupation. Caching mechanisms in Content Delivery Networks (CDNs), Mobile Edge Clouds (MECs) and fog computing have implemented the edge caching paradigm for different application scenarios. However, state-of-the-art caching mechanisms in the literature are mostly bound to specific application scenarios. With the rapid development of heterogeneous networks, the lack of uniform caching management has become an issue. Therefore, this paper proposes a novel caching mechanism, Semi-Edge (SE) caching. The SE mechanism builds on in-network caching techniques and can be applied generically to various types of network fog. Furthermore, two content allocation strategies, SE-U (unicast) and SE-B (broadcast), are proposed within the SE mechanism. The performance of SE-U and SE-B is evaluated in three typical topologies with various scenario contexts. Compared to edge caching, SE reduces latency by 7% and increases the cache hit ratio by 45%.
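The benefit of adding a caching layer above per-edge caches can be illustrated with a toy simulation: requests that miss a small edge cache may still hit a larger shared cache deeper in the network fog. The LRU policy, cache sizes, and request trace below are illustrative assumptions, not the SE algorithm itself.

```python
# Toy two-level caching simulation (illustrative, not the paper's SE scheme).

from collections import OrderedDict


class LRUCache:
    """Tiny LRU cache holding at most `capacity` content items."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)
            return True
        return False

    def put(self, key):
        self.store[key] = True
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used


def hit_ratio(trace, edge_size, parent_size=0):
    """Fraction of requests served by the edge cache or an optional parent."""
    edge = LRUCache(edge_size)
    parent = LRUCache(parent_size) if parent_size else None
    hits = 0
    for item in trace:
        if edge.get(item) or (parent is not None and parent.get(item)):
            hits += 1
        edge.put(item)
        if parent is not None:
            parent.put(item)
    return hits / len(trace)


# A cyclic trace whose working set (4 items) exceeds the edge cache (2 slots).
trace = ["a", "b", "c", "a", "b", "c", "d", "a"] * 10
edge_only = hit_ratio(trace, edge_size=2)
hierarchical = hit_ratio(trace, edge_size=2, parent_size=4)
assert hierarchical > edge_only
print(f"edge only:    {edge_only:.2%}")
print(f"hierarchical: {hierarchical:.2%}")
```

On this adversarial trace the edge cache alone thrashes, while the hierarchy absorbs most misses; the paper's reported gains come from its own allocation strategies, not from this toy policy.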
DOI: 10.1145/3213344.3213352 (published 2018-06-10)
Citations: 6
Proceedings of the 1st International Workshop on Edge Systems, Analytics and Networking
DOI: 10.1145/3213344 (published 2018-06-10)
Citations: 1
Enabling Edge Devices that Learn from Each Other: Cross Modal Training for Activity Recognition
Tianwei Xing, S. Sandha, Bharathan Balaji, Supriyo Chakraborty, M. Srivastava
Edge devices rely extensively on machine learning for intelligent inference and pattern matching. However, edge devices use a multitude of sensing modalities and are exposed to wide-ranging contexts. Developing separate machine learning models for each scenario is difficult because manual labeling does not scale. To reduce the amount of labeled data and to speed up the training process, we propose to transfer knowledge between edge devices by using unlabeled data. Our approach, called RecycleML, uses cross-modal transfer to accelerate the learning of edge devices across different sensing modalities. Using human activity recognition as a case study on our collected CMActivity dataset, we observe that RecycleML reduces the amount of required labeled data by at least 90% and speeds up the training process by up to 50 times compared to training the edge device from scratch.
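One intuition behind cross-modal transfer is that modality-specific encoders map different sensor inputs into a shared representation, so a new modality only has to train its encoder while reusing the shared task layers. The back-of-the-envelope sketch below makes that concrete; the layer sizes are invented for illustration and are not taken from RecycleML.

```python
# Parameter-count sketch of the cross-modal transfer idea: reuse the shared
# task head trained on one modality, train only the new modality's encoder.
# All dimensions below are illustrative assumptions.

def dense_params(in_dim: int, out_dim: int) -> int:
    """Parameters of a fully connected layer (weights + biases)."""
    return in_dim * out_dim + out_dim


LATENT = 64          # shared latent dimension (hypothetical)
NUM_CLASSES = 6      # e.g. activity classes (hypothetical)

# Each modality's encoder maps raw features into the shared latent space.
video_encoder = dense_params(1024, LATENT)
imu_encoder = dense_params(32, LATENT)
# The shared task head maps the latent space to activity labels.
shared_head = dense_params(LATENT, NUM_CLASSES)

scratch = imu_encoder + shared_head   # train everything for the IMU modality
recycled = imu_encoder                # reuse the head trained via video

savings = 1 - recycled / scratch
print(f"trainable parameters from scratch: {scratch}")
print(f"with recycled shared head:         {recycled}")
print(f"reduction: {savings:.0%}")
```

The real savings reported in the paper come from needing fewer labels and training epochs, not just fewer parameters; this sketch only shows where the reuse happens.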
DOI: 10.1145/3213344.3213351 (published 2018-06-10)
Citations: 30
A Multi-Cloudlet Infrastructure for Future Smart Cities: An Empirical Study
Julien Gedeon, Jeff Krisztinkovics, Christian Meurisch, Michael Stein, L. Wang, M. Mühlhäuser
The emerging paradigm of edge computing has proposed cloudlets to offload data and computations from mobile, resource-constrained devices. However, little attention has been paid to the question of where to deploy cloudlets in smart city environments. In this vision paper, we propose to deploy cloudlets on a city-wide scale by leveraging three kinds of existing infrastructure: cellular base stations, routers and street lamps. We motivate the use of this infrastructure with real location data of nearly 50,000 access points from a major city and analyze the potential coverage for the different cloudlet types. Besides spatial coverage, we also consider user traces from two mobile applications. Our results show that upgrading only a relatively small number of access points can lead to city-scale cloudlet coverage. This is especially true for the coverage analysis of the mobility traces, where mobile users are within the communication range of a cloudlet-enabled access point most of the time.
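The coverage question can be sketched as a small computation: given candidate cloudlet sites with a fixed communication range, what fraction of user locations lies within range of at least one upgraded site? The coordinates and range below are synthetic stand-ins, not the paper's dataset.

```python
# Synthetic coverage analysis: random 2D points stand in for base stations,
# routers, or street lamps, and for user locations. Numbers are illustrative.

import math
import random


def covered_fraction(users, sites, comm_range):
    """Fraction of users within comm_range of at least one cloudlet site."""
    def in_range(u, s):
        return math.dist(u, s) <= comm_range
    return sum(any(in_range(u, s) for s in sites) for u in users) / len(users)


random.seed(42)
city = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(500)]
sites = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(20)]

# Upgrading more sites can only improve coverage (superset of a smaller set).
few = covered_fraction(city, sites[:5], comm_range=1.5)
many = covered_fraction(city, sites, comm_range=1.5)
assert few <= many
print(f"coverage with 5 sites:  {few:.0%}")
print(f"coverage with 20 sites: {many:.0%}")
```

The paper's contribution is running this kind of analysis against real access point locations and real mobility traces; the sketch only shows the shape of the computation.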
DOI: 10.1145/3213344.3213348 (published 2018-06-10)
Citations: 16
The Web as a Distributed Computing Platform
N. Vasilakis, Pranjal Goel, Henri Maxime Demoulin, Jonathan M. Smith
Although the web is perceived as a vast, interconnected graph of content, its reality is very different. Immense computational resources are used to deliver this content and its associated services, and an even larger pool of computing power resides in edge user devices. This latent potential has gone unused. Ar frames the web as a distributed computing platform, unifying processing and storage infrastructure with a core programming model and a common set of browser-provided services. Exposing these inherent capacities to programmers unleashes a far more powerful capability: the Internet as a distributed computing system. We have implemented a prototype system that, while modest in scale, fully illustrates what can be realized.
DOI: 10.1145/3213344.3213346 (published 2018-06-10)
Citations: 1
Profit-aware Resource Management for Edge Computing Systems
C. Anglano, M. Canonico, Marco Guazzone
Edge Computing (EC) represents the most promising solution to the real-time or near-real-time processing needs of the data generated by Internet of Things devices. The emergence of Edge Infrastructure Providers (EIPs) will bring the benefits of EC to those enterprises that cannot afford to purchase, deploy, and manage their own edge infrastructures. The main goal of EIPs will be to maximize their profit, i.e., the difference between the revenue they earn from hosting applications and the cost they incur to run the infrastructure plus the penalties they pay when the QoS requirements of hosted applications are not met. To maximize profit, an EIP must strike a balance between these two factors. In this paper we present the Online Profit Maximization (OPM) algorithm, an approximation algorithm that aims at increasing the profit of an EIP without a priori knowledge. We assess the performance of OPM by simulating its behavior in a variety of realistic scenarios, in which data are generated by a population of moving users, and by comparing its results against those attained by an oracle (i.e., an unrealistic algorithm able to always make optimal decisions) and by a state-of-the-art alternative. Our results indicate that OPM always achieves results within 1% of the optimal ones and always outperforms the alternative solution.
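The profit objective described in the abstract (revenue minus infrastructure cost minus QoS penalties) can be written out as a toy calculation; the prices and the simple per-violation penalty rule below are illustrative assumptions, not the paper's model.

```python
# Toy version of the EIP profit objective:
#   profit = hosting revenue - infrastructure cost - QoS penalties.
# All numbers and the penalty rule are illustrative.

from dataclasses import dataclass
from typing import List


@dataclass
class HostedApp:
    revenue: float          # what the customer pays for this billing period
    latency_ms: float       # measured QoS
    latency_slo_ms: float   # agreed QoS target
    penalty: float          # refund owed if the SLO is violated


def eip_profit(apps: List[HostedApp], infrastructure_cost: float) -> float:
    revenue = sum(a.revenue for a in apps)
    penalties = sum(a.penalty for a in apps if a.latency_ms > a.latency_slo_ms)
    return revenue - infrastructure_cost - penalties


apps = [
    HostedApp(revenue=100.0, latency_ms=20.0, latency_slo_ms=50.0, penalty=30.0),
    HostedApp(revenue=80.0, latency_ms=90.0, latency_slo_ms=50.0, penalty=25.0),
]
print(eip_profit(apps, infrastructure_cost=60.0))  # 180 - 60 - 25 = 95.0
```

The tension the paper studies is visible even here: provisioning more resources raises the infrastructure cost term but shrinks the penalty term, and OPM's job is to balance the two online.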
DOI: 10.1145/3213344.3213349 (published 2018-06-10)
Citations: 26
Sizing Buffers of IoT Edge Routers
J. A. Khan, Muhammad Shahzad, A. Butt
In typical IoT systems, sensors and actuators are connected to small embedded computers, called IoT devices, and the IoT devices are connected to one or more appropriate cloud services over the internet through an edge access router. A very important design aspect of an IoT edge router is the size of the output packet buffer on the interface that connects to the access link. Selecting an appropriate size for this buffer is crucial because it directly impacts two key performance metrics: 1) access link utilization and 2) latency. In this paper, we calculate the output buffer size that keeps the access link highly utilized while significantly lowering the average latency experienced by packets. To calculate this buffer size, we theoretically model the average TCP congestion window size of all IoT devices while eliminating three key assumptions of prior art that do not hold for IoT TCP traffic, as we demonstrate through a measurement study. We show that for IoT traffic, the buffer size calculated by our method results in 50% lower queuing delay than state-of-the-art schemes while achieving similar access link utilization and loss rate.
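For context, the classic buffer-sizing rules that play the role of the prior art revisited here are the bandwidth-delay product B = RTT * C and the Appenzeller et al. "Stanford rule" B = RTT * C / sqrt(n) for n long-lived TCP flows. The sketch below implements those textbook formulas, not the paper's IoT-specific method; the RTT, link speed, and flow count are illustrative.

```python
# Classic router buffer-sizing rules (background, NOT this paper's method).

import math


def bdp_bytes(rtt_s: float, capacity_bps: float) -> float:
    """Bandwidth-delay product: buffer for a single (or synchronized) flow."""
    return rtt_s * capacity_bps / 8


def stanford_rule_bytes(rtt_s: float, capacity_bps: float, n_flows: int) -> float:
    """Reduced buffer when n desynchronized long-lived flows share the link."""
    return bdp_bytes(rtt_s, capacity_bps) / math.sqrt(n_flows)


# 100 ms RTT, 10 Mbit/s access link, 100 IoT flows (illustrative numbers).
full = bdp_bytes(0.1, 10e6)
reduced = stanford_rule_bytes(0.1, 10e6, 100)
print(f"BDP buffer:    {full / 1000:.0f} kB")
print(f"Stanford rule: {reduced / 1000:.1f} kB")
```

The paper's point is that the assumptions baked into such rules (e.g. long-lived, saturating TCP flows) do not hold for IoT traffic, which is why it re-derives the buffer size from a congestion window model measured on IoT devices.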
DOI: 10.1145/3213344.3213354 (published 2018-06-10)
Citations: 4
Enabling GPU-assisted Antivirus Protection on Android Devices through Edge Offloading
Dimitris Deyannis, Rafail Tsirbas, G. Vasiliadis, R. Montella, Sokol Kosta, S. Ioannidis
Antivirus software is the most popular tool for detecting and stopping malicious or unwanted files. However, the performance requirements of traditional host-based antivirus make its wide adoption on mobile, embedded, and hand-held devices questionable. The computational- and memory-intensive characteristics needed to cope with evolved and sophisticated malware make its deployment on mobile processors a hard task. Moreover, its increasing complexity may result in vulnerabilities that can be exploited by malware. In this paper, we first describe a GPU-based antivirus algorithm for Android devices. Then, because the number of GPU-enabled Android devices is limited, we present different architecture designs that exploit code offloading to run the antivirus on more powerful machines. This approach enables lower execution and memory overheads, better performance, and improved deployability and management. We evaluate the performance, scalability, and efficacy of the system in several different scenarios and setups. We show that the time to detect malware is 8.4 times lower than with the typical local execution approach.
DOI: 10.1145/3213344.3213347 (published 2018-06-10)
Citations: 13
Voice enabling mobile applications with UIVoice
Ahmad Bisher Tarakji, Jian Xu, Juan A. Colmenares, Iqbal Mohomed
Improvements in cloud-based speech recognition have led to an explosion in voice assistants, deployed as bespoke devices in homes and cars, in wearables, or on smartphones. In this paper, we present UIVoice, through which we enable voice assistants (which heavily utilize the cloud) to dynamically interact with mobile applications running at the edge. We present a framework that third-party developers can use to easily create Voice User Interfaces (VUIs) on top of existing applications. We demonstrate the feasibility of our approach through a prototype based on Android and Amazon Alexa, describe how we added voice to several popular applications, and provide an initial performance evaluation. We also highlight research challenges that are relevant to the edge computing community.
DOI: 10.1145/3213344.3213353 (published 2018-06-10)
Citations: 10