
Latest publications from the 2020 IEEE/ACM Symposium on Edge Computing (SEC)

Towards Context-aware Distributed Learning for CNN in Mobile Applications
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00045
Zhuwei Qin, Hao Jiang
Intelligent mobile applications have become ubiquitous on mobile devices. These applications keep collecting new and sensitive data from different users while being expected to continually adapt the embedded machine learning model to the newly collected data. To improve the quality of service while protecting users’ privacy, distributed mobile learning (e.g., Federated Learning (FedAvg) [1]) has been proposed to offload model training from the cloud to the mobile devices, which enables multiple devices to collaboratively train a shared model without leaking the data to the cloud. However, this design becomes impracticable when training the machine learning model (e.g., a Convolutional Neural Network (CNN)) on mobile devices with diverse application contexts. For example, in conventional distributed training schemes, different devices are assumed to have integrated training datasets and to train identical CNN model structures. Distributed collaboration between devices is implemented by a straightforward weight average of the identical local models. In contrast, in mobile image classification tasks, different mobile applications have dedicated classification targets depending on individual users’ preferences and application specificity. Therefore, directly averaging the model weights of each local model results in a significant reduction of test accuracy. To solve this problem, we propose CAD: a context-aware distributed learning framework for mobile applications, in which each mobile device is deployed with a context-adaptive submodel structure instead of the entire global model structure.
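A minimal sketch contrasting plain FedAvg-style weight averaging with a context-aware aggregation in the spirit of the abstract. The per-device `context_mask` (which output units a device actually trains) is a hypothetical construct for illustration, not the authors’ implementation of CAD.

```python
# Sketch only: plain FedAvg averaging vs. a hedged approximation of
# context-aware submodel aggregation. The context_mask arrays are hypothetical.
import numpy as np

def fedavg(local_weights):
    """Plain FedAvg: element-wise average of identical local models."""
    return np.mean(np.stack(local_weights), axis=0)

def context_aware_average(local_weights, context_masks):
    """Average each weight only across the devices whose context covers it."""
    stacked = np.stack(local_weights)
    masks = np.stack(context_masks).astype(float)
    counts = np.clip(masks.sum(axis=0), 1.0, None)   # avoid division by zero
    return (stacked * masks).sum(axis=0) / counts

# Toy example: 3 devices, a 4-unit classifier layer; each device only
# trains the classes its own application context actually uses.
locals_ = [np.random.randn(4) for _ in range(3)]
masks = [np.array([1, 1, 0, 0]), np.array([0, 1, 1, 0]), np.array([0, 0, 1, 1])]
print("FedAvg:        ", fedavg(locals_))
print("Context-aware: ", context_aware_average(locals_, masks))
```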
Citations: 0
Towards Online Learning and Concept Drift for Offloading Complex Event Processing in the Edge
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00024
João Alexandre Neto, Jorge C. B. Fonseca, Kiev Gama
Edge computing has enabled the use of Complex Event Processing (CEP) closer to data sources, delivering on-time responses to critical applications. One of the challenges in this context is how to support this processing while keeping resource usage (e.g., memory, CPU) optimal. State-of-the-art solutions have suggested computational offloading techniques to distribute processing across the nodes and reach such optimization. Most of them make the offloading decision through predefined policies or adaptive solutions that use machine learning algorithms. However, these techniques are not able to learn incrementally without historical data or to adapt to changes in the statistical properties of the data. This research aims to use online learning and concept drift detection in the offloading decision to optimize resource usage and keep the learning model up to date. The feasibility of our approach was observed through preliminary evaluations.
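A sketch, not the authors’ method: an incrementally trained offloading classifier (logistic regression via SGD) paired with a crude drift check based on the recent error rate. The features, learning rate, and drift threshold are hypothetical stand-ins chosen only to illustrate the online-learning-plus-drift idea.

```python
# Minimal online offloading decision with a naive concept-drift reaction.
import numpy as np
from collections import deque

class OnlineOffloader:
    def __init__(self, n_features, lr=0.05, window=200, drift_ratio=1.5):
        self.w = np.zeros(n_features)
        self.lr = lr
        self.errors = deque(maxlen=window)
        self.baseline = None
        self.drift_ratio = drift_ratio

    def predict(self, x):                      # 1 = offload, 0 = process locally
        return int(self.w @ x > 0.0)

    def update(self, x, y):
        p = 1.0 / (1.0 + np.exp(-self.w @ x))  # sigmoid probability of offloading
        self.w += self.lr * (y - p) * x        # one SGD step on the log-loss
        self.errors.append(int(self.predict(x) != y))
        rate = sum(self.errors) / len(self.errors)
        if self.baseline is None and len(self.errors) == self.errors.maxlen:
            self.baseline = rate               # freeze a reference error rate
        elif self.baseline is not None and rate > self.drift_ratio * max(self.baseline, 1e-3):
            self.w[:] = 0.0                    # naive drift reaction: reset the model
            self.baseline = None
```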
Citations: 2
Poster: An Assessment Framework for Edge Applications
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00030
Martin Wagner, Julien Gedeon, Karolis Skaisgiris, Florian Brandherm, M. Mühlhäuser
We introduce an assessment framework for edge computing applications. The framework allows developers to measure the execution time of their applications in different environments and to generate a model for predicting execution times. Based on these measurements and predictions, better-informed management decisions can be made for edge applications.
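The abstract only says a prediction model is generated from measurements; the sketch below fits a simple least-squares predictor from hypothetical environment features (CPU share, memory, input size), purely to make the measure-then-predict workflow concrete.

```python
# Sketch under assumptions: least-squares execution-time model from measurements.
import numpy as np

# measurements: rows = runs, columns = [cpu_share, mem_mb, input_kb]
X = np.array([[1.0, 512, 100], [0.5, 512, 100], [1.0, 1024, 400], [0.25, 256, 50]])
t = np.array([1.2, 2.3, 2.9, 4.1])             # observed execution times (s)

A = np.hstack([X, np.ones((X.shape[0], 1))])   # add a bias column
w, *_ = np.linalg.lstsq(A, t, rcond=None)      # solve min ||Aw - t||

def predict_exec_time(cpu_share, mem_mb, input_kb):
    return float(np.array([cpu_share, mem_mb, input_kb, 1.0]) @ w)

print(predict_exec_time(0.5, 1024, 200))       # predicted seconds for a new environment
```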
Citations: 1
Proactive Microservice Placement and Migration for Mobile Edge Computing
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00010
Kaustabha Ray, A. Banerjee, N. Narendra
In recent times, Mobile Edge Computing (MEC) has emerged as a new paradigm allowing low-latency access to services deployed on edge nodes that offer computation, storage and communication facilities. Vendors deploy their services on MEC servers to improve performance and mitigate the network latencies often encountered in accessing cloud services. A service placement policy determines which services are deployed on which MEC servers. A number of mechanisms exist in the literature to determine the optimal placement of services under different performance metrics. However, for applications designed as microservice workflow architectures, service placement schemes need to be re-examined through a different lens owing to the inherent interdependencies between microservices. Indeed, the dynamic environment, with stochastic user movement and service invocations, along with a large placement configuration space, makes microservice placement in MEC a challenging task. Additionally, owing to user mobility, a placement scheme may need to be recalibrated, triggering service migrations to maintain the advantages offered by MEC. Existing microservice placement and migration schemes consider on-demand strategies. In this work, we take a different route and propose a Reinforcement Learning based proactive mechanism for microservice placement and migration. We use the San Francisco Taxi dataset to validate our approach. Experimental results show the effectiveness of our approach in comparison to other state-of-the-art methods.
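A generic tabular Q-learning sketch for proactive placement, not the authors’ algorithm: the state (user region), action (which MEC server hosts the microservice) and reward model below are hypothetical stand-ins for whatever the paper actually uses.

```python
# Toy Q-learning placement agent with stochastic user mobility.
import random

N_REGIONS, N_SERVERS = 4, 3
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = [[0.0] * N_SERVERS for _ in range(N_REGIONS)]

def latency(region, server):                   # toy latency model
    return abs(region - server) + random.random()

def step(region):
    # epsilon-greedy choice over candidate MEC servers
    if random.random() < EPS:
        server = random.randrange(N_SERVERS)
    else:
        server = max(range(N_SERVERS), key=lambda s: Q[region][s])
    reward = -latency(region, server)          # lower latency -> higher reward
    next_region = random.randrange(N_REGIONS)  # stochastic user movement
    best_next = max(Q[next_region])
    Q[region][server] += ALPHA * (reward + GAMMA * best_next - Q[region][server])
    return next_region

region = 0
for _ in range(10000):
    region = step(region)
print(Q)                                       # learned placement preferences per region
```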
Citations: 19
BEAF: A Blockchain and Edge Assistant Framework with Data Sharing for IoT Networks
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00054
Chanying Huang, Yingxun Hu
Edge computing is emerging as an innovative technology that brings data processing and storage closer to end users, bringing scale, decentralization and safety to IoT networks. It improves the quality of service but meanwhile introduces great challenges such as data security and latency. Fortunately, blockchain technology can mitigate the security issues of edge computing in IoT networks, as it allows only trusted IoT nodes to interact with each other. To better address the security risks of the data sharing problem and improve the credibility of data, this paper presents a Blockchain and Edge computing Assistant security Framework (BEAF) with data sharing for IoT networks. With blockchain and edge computing, BEAF supports both decentralization and data tracing after data sharing. In addition, BEAF can enforce access control and share data with specific nodes. We also conduct a security analysis and show that BEAF provides confidentiality, availability, data integrity, etc. Furthermore, we develop the base layer with Hyperledger Fabric and evaluate the performance of BEAF in terms of stability, scalability and efficiency.
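A toy sketch, not BEAF’s design and not Hyperledger Fabric code: a hash-chained log of sharing events with a simple access-control check, just to illustrate how data tracing after sharing can work in principle. All identifiers below are made up.

```python
# Hash-chained sharing trace with an access-control list (illustrative only).
import hashlib, json, time

class SharingLedger:
    def __init__(self):
        self.chain = [{"prev": "0" * 64, "event": "genesis", "ts": 0}]
        self.acl = {}                          # data_id -> set of allowed nodes

    def grant(self, data_id, node_id):
        self.acl.setdefault(data_id, set()).add(node_id)

    def _hash(self, block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def share(self, data_id, owner, recipient):
        if recipient not in self.acl.get(data_id, set()):
            raise PermissionError(f"{recipient} is not allowed to receive {data_id}")
        block = {"prev": self._hash(self.chain[-1]),
                 "event": f"{owner} shared {data_id} with {recipient}",
                 "ts": time.time()}
        self.chain.append(block)               # append-only trace of sharing events

ledger = SharingLedger()
ledger.grant("sensor-42", "edge-node-B")
ledger.share("sensor-42", "edge-node-A", "edge-node-B")
print(len(ledger.chain), ledger.chain[-1]["event"])
```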
Citations: 1
Quantized Reservoir Computing on Edge Devices for Communication Applications
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00068
Shiya Liu, Lingjia Liu, Y. Yi
With the advance of edge computing, a fast and efficient machine learning model running on edge devices is needed. In this paper, we propose a novel quantization approach that reduces the memory and compute demands on edge devices without losing much accuracy. We also explore its application in communication tasks such as symbol detection in 5G systems, attack detection in smart grids, and dynamic spectrum access. Conventional neural networks such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) can be exploited in these applications and achieve state-of-the-art performance. However, conventional neural networks consume a large amount of computation and storage resources, and thus do not fit well on edge devices. Reservoir computing (RC), a computation framework derived from RNNs, consists of a fixed reservoir layer and a trained readout layer. The advantages of RC compared to traditional RNNs are faster learning and lower training costs. Besides, RC has faster inference with fewer parameters and is resistant to overfitting. These merits make RC systems more suitable for applications running on edge devices. We apply the proposed quantization approach to RC systems and demonstrate the quantized RC system on a Xilinx Zynq®-7000 FPGA board. On the sequential MNIST dataset, the quantized RC system uses 62%, 65%, and 64% fewer DSP, FF, and LUT resources, respectively, than the floating-point RNN. The inference speed is improved by 17 times with an 8% accuracy drop.
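A minimal sketch of the general idea, under assumptions: an echo state network with a fixed random reservoir and a trained readout, where only the readout weights are uniformly quantized to a low bit-width. This is not the paper’s quantization scheme; the bit-width, reservoir size and toy task are illustrative.

```python
# Echo state network with a 4-bit quantized readout (illustrative).
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_RES = 1, 100

W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))       # keep spectral radius below 1

def run_reservoir(u_seq):
    x, states = np.zeros(N_RES), []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

def quantize(w, bits=4):
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale           # uniform symmetric quantization

# toy task: predict the next sample of a sine wave
u = np.sin(np.linspace(0, 20 * np.pi, 1000))
X, y = run_reservoir(u[:-1]), u[1:]
W_out = np.linalg.lstsq(X, y, rcond=None)[0]     # train the readout only
W_out_q = quantize(W_out, bits=4)                # quantized readout for the edge device
print("float MSE:", np.mean((X @ W_out - y) ** 2),
      "4-bit MSE:", np.mean((X @ W_out_q - y) ** 2))
```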
Citations: 3
TAES: Two-factor Authentication with End-to-End Security against VoIP Phishing
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00049
Dai Hou, Hao Han, Ed Novak
In the current state of communication technology, the abuse of VoIP has led to the emergence of telecommunications fraud. We urgently need an end-to-end identity authentication mechanism to verify the identity of the caller. This paper proposes an end-to-end, dual identity authentication mechanism to address telecommunications fraud. Our first technique uses the Hermes data transmission algorithm over an unknown voice channel to transmit a certificate, thereby authenticating the caller’s phone number. Our second technique uses voice-print recognition and a Gaussian mixture model (a general background probabilistic model) to build a model of the speaker and verify the caller’s voice, ensuring the speaker’s identity. Our solution is implemented on the Android platform, and we test and evaluate both transmission efficiency and speaker recognition. Experiments conducted on Android phones show that the error rate of the voice-channel certificate transmission is within 3.247%, so the certificate signature verification mechanism is feasible. The accuracy of the voice-print recognition is 72%, making it effective as a reference for identity authentication.
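A sketch of GMM-based speaker verification in the general style the abstract describes, not TAES itself: real systems score acoustic features (e.g., MFCCs) against a speaker model and a universal background model, whereas here the “features” are random stand-ins and the decision threshold is hypothetical.

```python
# GMM speaker-model vs. background-model verification (illustrative).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
enroll = rng.normal(loc=0.0, size=(500, 13))       # enrollment frames (claimed speaker)
background = rng.normal(loc=1.5, size=(2000, 13))  # background / other speakers

speaker_gmm = GaussianMixture(n_components=8, random_state=0).fit(enroll)
ubm = GaussianMixture(n_components=8, random_state=0).fit(background)

def verify(frames, threshold=0.5):
    # average log-likelihood ratio between speaker model and background model
    llr = speaker_gmm.score(frames) - ubm.score(frames)
    return llr > threshold, llr

print(verify(rng.normal(loc=0.0, size=(300, 13))))  # genuine-like frames
print(verify(rng.normal(loc=1.5, size=(300, 13))))  # impostor-like frames
```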
Citations: 2
A Contextual Bi-armed Bandit Approach for MPTCP Path Management in Heterogeneous LTE and WiFi Edge Networks
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00042
A. Alzadjali, Flavio Esposito, J. Deogun
Multi-homed mobile devices are capable of aggregating traffic transmissions over heterogeneous networks. MultiPath TCP (MPTCP) is an evolution of TCP that allows the simultaneous use of multiple interfaces for a single connection. Despite the success of MPTCP, its deployment can be enhanced by controlling which network interface is used as the initial path during connection setup. In this paper, we propose an online MPTCP path manager based on a contextual bandit algorithm that helps choose the optimal primary path connection, maximizing throughput and minimizing delay and packet loss. The contextual bandit path manager deals with the rapid changes of multiple transmission paths in heterogeneous networks. The output of this algorithm provides an adaptive policy to the path manager whenever an MPTCP connection is attempted, based on the last-hop wireless signal characteristics. Our experiments run over a real dataset of WiFi/LTE networks using an NS3 implementation of MPTCP, enhanced to better support MPTCP path management control. We analyzed MPTCP’s throughput and latency metrics under various network conditions and found that the performance of the contextual bandit MPTCP path manager improved over the baselines used in our evaluation experiments. Utilizing edge computing technology, this model can be implemented in a mobile edge computing server to sidestep MPTCP path management issues by communicating to the mobile equipment the best path for the given radio conditions. Our evaluation demonstrates that leveraging adaptive context-awareness improves the utilization of multiple network interfaces.
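A LinUCB-style two-armed contextual bandit sketch for picking the primary subflow (WiFi vs. LTE) from last-hop signal features. This is a generic contextual-bandit illustration, not the paper’s exact algorithm; the feature vector and reward definition (e.g., measured throughput) are assumptions.

```python
# Two-armed LinUCB for primary-path selection (illustrative).
import numpy as np

ARMS = ["wifi", "lte"]
D = 3                                            # e.g., [RSSI, RSRP, load], normalized
A = [np.eye(D) for _ in ARMS]                    # per-arm design matrices
b = [np.zeros(D) for _ in ARMS]
ALPHA = 1.0                                      # exploration strength

def choose(context):
    scores = []
    for a in range(len(ARMS)):
        A_inv = np.linalg.inv(A[a])
        theta = A_inv @ b[a]
        ucb = theta @ context + ALPHA * np.sqrt(context @ A_inv @ context)
        scores.append(ucb)
    return int(np.argmax(scores))

def update(arm, context, reward):
    A[arm] += np.outer(context, context)
    b[arm] += reward * context

ctx = np.array([0.7, 0.2, 0.4])                  # last-hop signal features
arm = choose(ctx)
update(arm, ctx, reward=1.0)                     # reward could be measured throughput
print("primary path:", ARMS[arm])
```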
Citations: 4
Fogify: A Fog Computing Emulation Framework
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00011
Moysis Symeonides, Z. Georgiou, Demetris Trihinas, G. Pallis, M. Dikaiakos
Fog Computing is emerging as the dominating paradigm bridging the compute and connectivity gap between sensing devices and latency-sensitive services. However, experimenting and evaluating IoT services is a daunting task involving the manual configuration and deployment of a mixture of geodistributed physical and virtual infrastructure with different resource and network requirements. This results in sub-optimal, costly and error-prone deployments due to numerous unexpected overheads not initially envisioned in the design phase and underwhelming testing conditions not resembling the end environment. In this paper, we introduce Fogify, an emulator easing the modeling, deployment and large-scale experimentation of fog and edge testbeds. Fogify provides a toolset to: (i) model complex fog topologies comprised of heterogeneous resources, network capabilities and QoS criteria; (ii) deploy the modelled configuration and services using popular containerized descriptions to a cloud or local environment; (iii) experiment, measure and evaluate the deployment by injecting faults and adapting the configuration at runtime to test different “what-if” scenarios that reveal the limitations of a service before introduced to the public. In the evaluation, proof-of-concept IoT services with real-world workloads are introduced to show the wide applicability and benefits of rapid prototyping via Fogify.
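A hypothetical illustration of the kind of topology model such an emulator consumes (node resources, link QoS, service placement). This is not Fogify’s actual schema or API; the node names, fields and latency check are invented purely to make the modeling step and a “what-if” query concrete.

```python
# Toy fog-topology description and a simple end-to-end latency "what-if" check.
topology = {
    "nodes": {
        "cloud-vm":  {"cpu_cores": 8, "memory_gb": 16},
        "edge-gw":   {"cpu_cores": 2, "memory_gb": 2},
        "sensor-pi": {"cpu_cores": 1, "memory_gb": 0.5},
    },
    "links": {
        ("sensor-pi", "edge-gw"): {"latency_ms": 5,  "bandwidth_mbps": 50},
        ("edge-gw", "cloud-vm"):  {"latency_ms": 40, "bandwidth_mbps": 100},
    },
    "services": {
        "preprocess": {"image": "app/preprocess:latest", "placed_on": "edge-gw"},
        "analytics":  {"image": "app/analytics:latest",  "placed_on": "cloud-vm"},
    },
}

def end_to_end_latency(path):
    """Sum link latencies along a path of node names."""
    return sum(topology["links"][(a, b)]["latency_ms"] for a, b in zip(path, path[1:]))

print(end_to_end_latency(["sensor-pi", "edge-gw", "cloud-vm"]))  # 45 ms
```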
Citations: 36
Fooling Edge Computation Offloading via Stealthy Interference Attack
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00062
Letian Zhang, Jie Xu
There is a growing interest in developing deep learning methods to solve resource management problems in wireless edge computing systems where model-based designs are infeasible. While deep learning is known to be vulnerable to adversarial example attacks, the security risk of learning-based designs in the context of edge computing is not well understood. In this paper, we propose and study a new adversarial example attack, called the stealthy interference attack (SIA), on deep reinforcement learning (DRL)-based edge computation offloading systems. In SIA, the attacker exerts a carefully determined level of interference signal to change the input states of the DRL-based policy, thereby fooling the mobile device into selecting a targeted, compromised edge server for computation offloading while evading detection. Simulation results demonstrate the effectiveness of SIA and show that our algorithm outperforms existing adversarial machine learning algorithms in terms of a higher attack success probability and lower power consumption.
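A conceptual sketch of the attack idea, not the paper’s SIA algorithm: given a fixed, hypothetical offloading policy, search for the smallest interference level on an observed channel-quality feature that makes the policy pick the attacker’s target server. The toy linear policy and the assumption that interference degrades a single state feature are illustrative only.

```python
# Smallest interference level that flips a toy offloading policy's choice.
import numpy as np

W = np.array([[2.0, 0.5, 0.1],                   # toy linear policy: one row per server
              [0.5, 2.0, 0.1]])                  # action = argmax(W @ state)

def policy(state):
    return int(np.argmax(W @ state))

def stealthy_interference(state, target_action, max_power=1.0, steps=100):
    """Degrade the victim server's channel feature until the target server wins."""
    for p in np.linspace(0.0, max_power, steps):
        perturbed = state.copy()
        perturbed[0] -= p                        # interference lowers feature 0
        if policy(perturbed) == target_action:
            return p, perturbed                  # smallest power that flips the choice
    return None, state                           # attack fails within the power budget

state = np.array([1.0, 0.8, 0.3])                # [server0 quality, server1 quality, load]
print("clean action:", policy(state))
print(stealthy_interference(state, target_action=1))
```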
Citations: 2