
Proceedings of the Second ACM/IEEE Symposium on Edge Computing: Latest Publications

An empirical study of latency in an emerging class of edge computing applications for wearable cognitive assistance
Pub Date: 2017-10-12 DOI: 10.1145/3132211.3134458
Zhuo Chen, Wenlu Hu, Junjue Wang, Siyan Zhao, Brandon Amos, Guanhang Wu, Kiryong Ha, Khalid Elgazzar, P. Pillai, R. Klatzky, D. Siewiorek, M. Satyanarayanan
An emerging class of interactive wearable cognitive assistance applications is poised to become one of the key demonstrators of edge computing infrastructure. In this paper, we design seven such applications and evaluate their performance in terms of latency across a range of edge computing configurations, mobile hardware, and wireless networks, including 4G LTE. We also devise a novel multi-algorithm approach that leverages temporal locality to reduce end-to-end latency by 60% to 70%, without sacrificing accuracy. Finally, we derive target latencies for our applications, and show that edge computing is crucial to meeting these targets.
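The multi-algorithm idea can be made concrete with a small sketch: run a cheap tracking algorithm on most frames, and fall back to an expensive detector only when temporal locality breaks down (for example, when tracking confidence drops). The pipeline below is a hedged illustration of that pattern; the callables `fast_track` and `accurate_detect` and the confidence threshold are hypothetical placeholders, not the authors' implementation.

```python
# Hedged sketch of a multi-algorithm pipeline that exploits temporal locality:
# a fast tracker handles most frames, and a slow-but-accurate detector is
# invoked only when the tracked result is no longer trustworthy.
CONFIDENCE_THRESHOLD = 0.6  # illustrative value

class MultiAlgorithmPipeline:
    def __init__(self, fast_track, accurate_detect):
        self.fast_track = fast_track            # cheap, e.g. feature tracking
        self.accurate_detect = accurate_detect  # expensive, e.g. DNN detection
        self.last_result = None

    def process(self, frame):
        if self.last_result is not None:
            result, confidence = self.fast_track(frame, self.last_result)
            if confidence >= CONFIDENCE_THRESHOLD:
                self.last_result = result
                return result                   # fast path: temporal locality holds
        # Slow path: re-run the accurate detector and re-seed the tracker.
        result = self.accurate_detect(frame)
        self.last_result = result
        return result
```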
Citations: 192
Privacy-preserving of platoon-based V2V in collaborative edge: poster abstract
Pub Date: 2017-10-12 DOI: 10.1145/3132211.3132464
Te-Chuan Chiu, Junshan Zhang, Ai-Chun Pang
Next-generation 5G cellular networks are designed around a device-centric architecture, which provides not only human-based services but also machine-type communication (Internet of Things applications). To meet next-generation IoT service requirements such as automated driving, in this paper we consider multiple automated vehicles, each of which has processing and communication capability as a fog node, and which form multiple groups (platoons) that carry out a pseudonym change procedure. Unlike traditional studies that lack the platoon design concept, we consider "reaction time", a practical platoon management factor that affects the intra-platoon spacing between the leading vehicle and the following vehicle, and tackle a platoon-aware pseudonym change problem that jointly achieves privacy gains and traffic efficiency through the cooperation of multiple platoons at the edge.
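As a rough illustration of how reaction time can enter such a scheme, the sketch below derives each follower's required intra-platoon spacing from its speed and reaction time, and only permits a platoon-wide pseudonym change when every gap is large enough. All names, the spacing formula, and the safety margin are illustrative assumptions, not the paper's formulation.

```python
# Hedged sketch: gate a platoon-wide pseudonym change on the spacing implied
# by each follower's reaction time (spacing ~ speed * reaction_time + margin).
from dataclasses import dataclass

@dataclass
class Vehicle:
    pseudonym: str
    speed_mps: float        # current speed, metres per second
    reaction_time_s: float  # driver/controller reaction time

def min_safe_spacing(v: Vehicle, margin_m: float = 5.0) -> float:
    """Gap a follower needs before it can safely take part in a change."""
    return v.speed_mps * v.reaction_time_s + margin_m

def can_change_pseudonyms(platoon, gaps_m) -> bool:
    """gaps_m[i] is the spacing between follower i+1 and its predecessor."""
    return all(gap >= min_safe_spacing(v) for v, gap in zip(platoon[1:], gaps_m))

def change_pseudonyms(platoon, new_names) -> None:
    """Synchronized change across the whole platoon."""
    for v, name in zip(platoon, new_names):
        v.pseudonym = name
```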
Citations: 0
A vehicle-based edge computing platform for transit and human mobility analytics
Pub Date: 2017-10-12 DOI: 10.1145/3132211.3134446
Bozhao Qi, Lei Kang, Suman Banerjee
This paper introduces Trellis, a low-cost Wi-Fi-based in-vehicle monitoring and tracking system that passively observes mobile devices and provides various analytics about people both within and outside a vehicle, which can lead to interesting population insights at a city scale. Our system runs on a vehicle-based edge computing platform and is a complementary mechanism that allows operators to collect various information, such as origin-destination stations popular among passengers, occupancy of vehicles, pedestrian activity trends, and more. To conduct most of our analytics, we develop simple but effective algorithms that determine which devices are actually inside (or outside) a vehicle by leveraging contextual information. While our current system does not provide accurate absolute counts of passengers and pedestrians, we expect the relative numbers and general trends to be fairly useful from an analytics perspective. We have deployed Trellis on a vehicle-based edge computing platform over a period of ten months, and have collected more than 30,000 miles of travel data spanning multiple bus routes. By combining our techniques with bus schedule and weather information, we present a varied human mobility analysis across multiple aspects: activity trends of passengers in transit systems; trends of pedestrians on city streets; and how external factors, e.g., temperature and weather, impact human outdoor activities. These observations demonstrate the usefulness of Trellis in the proposed settings.
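One simple piece of contextual information for the inside/outside decision is persistence: a device that the on-board Wi-Fi keeps observing across several consecutive stops is very likely riding in the vehicle, while one seen only around a single stop is probably a pedestrian outside. The sketch below applies that heuristic; the threshold and data layout are illustrative assumptions, not Trellis's actual algorithm.

```python
# Hedged sketch of an inside/outside classifier based on observation persistence:
# devices observed across enough consecutive stop intervals count as on-board.
from collections import defaultdict

MIN_STOPS_ON_BOARD = 3  # illustrative threshold

def classify_devices(observations):
    """observations: iterable of (stop_id, device_mac) from passive Wi-Fi scans."""
    stops_seen = defaultdict(set)
    for stop_id, mac in observations:
        stops_seen[mac].add(stop_id)
    on_board = {mac for mac, stops in stops_seen.items()
                if len(stops) >= MIN_STOPS_ON_BOARD}
    outside = set(stops_seen) - on_board
    return on_board, outside

# Example: a device seen at stops 1-4 is treated as a passenger,
# one seen only near stop 2 as a pedestrian outside the vehicle.
passengers, pedestrians = classify_devices(
    [(1, "aa"), (2, "aa"), (3, "aa"), (4, "aa"), (2, "bb")])
```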
Citations: 40
Gremlin: scheduling interactions in vehicular computing
Pub Date: 2017-10-12 DOI: 10.1145/3132211.3134450
Kyungmin Lee, J. Flinn, Brian D. Noble
Vehicular applications must not demand too much of a driver's attention. They often run in the background and initiate interactions with the driver to deliver important information. We argue that the vehicular computing system must schedule interactions by considering their priority, the attention they will demand, and how much attention the driver currently has to spare. Based on these considerations, it should either allow a given interaction or defer it. We describe a prototype called Gremlin that leverages edge computing infrastructure to help schedule interactions initiated by vehicular applications. It continuously performs four tasks: (1) monitoring driving conditions to estimate the driver's available attention, (2) recording interactions for analysis, (3) generating a user-specific quantitative model of the attention required for each distinct interaction, and (4) scheduling new interactions based on the above data. Gremlin performs the third task on edge computing infrastructure. Offload is attractive because the analysis is too computationally demanding to run on vehicular platforms. Since recording size for each interaction can be large, it is preferable to perform the offloaded computation at the edge of the network rather than in the cloud, and thereby conserve wide-area network bandwidth. We evaluate Gremlin by comparing its decisions to those recommended by a vehicular UI expert. Gremlin's decisions agree with the expert's over 90% of the time, much more frequently than the coarse-grained scheduling policies used by current vehicle systems. Further, we find that offloading of analysis to edge platforms reduces use of wide-area networks by an average of 15MB per analyzed interaction.
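The core scheduling decision can be captured in a few lines: deliver an interaction only if the attention it is modeled to demand fits within the driver's currently spare attention, and defer it otherwise, with an override for the highest-priority messages. The attention scale, priority levels, and deferral queue below are illustrative assumptions rather than Gremlin's actual policy.

```python
# Hedged sketch of an attention-aware interaction scheduler. Attention is
# modeled on an arbitrary 0-1 scale; the URGENT override is an assumption.
import heapq
from enum import IntEnum

class Priority(IntEnum):
    LOW = 0
    NORMAL = 1
    URGENT = 2

class InteractionScheduler:
    def __init__(self):
        self._deferred = []  # min-heap keyed by (-priority, seq): urgent pops first
        self._seq = 0

    def submit(self, interaction, required_attention, priority, available_attention):
        """Deliver the interaction now if it fits the spare attention, else defer."""
        if priority == Priority.URGENT or required_attention <= available_attention:
            return ("deliver", interaction)
        heapq.heappush(self._deferred,
                       (-priority, self._seq, required_attention, interaction))
        self._seq += 1
        return ("defer", interaction)

    def drain(self, available_attention):
        """Release deferred interactions that now fit within the attention budget."""
        released = []
        while self._deferred and self._deferred[0][2] <= available_attention:
            _, _, cost, interaction = heapq.heappop(self._deferred)
            released.append(interaction)
            available_attention -= cost
        return released
```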
Citations: 19
PredriveID: pre-trip driver identification from in-vehicle data
Pub Date: 2017-10-12 DOI: 10.1145/3132211.3134462
Gorkem Kar, Shubham Jain, M. Gruteser, Jinzhu Chen, F. Bai, R. Govindan
This paper explores the minimal dataset necessary at vehicular edge nodes to effectively differentiate drivers using data from existing in-vehicle sensors. This facilitates novel personalization, insurance, advertising, and security applications, but can also help in understanding the privacy sensitivity of such data. Existing work on differentiating drivers largely relies on devices that drivers carry, or on the locations that drivers visit, to distinguish drivers. Internally, however, the vehicle processes a much richer set of sensor information that is becoming increasingly available to external services. To explore how easily drivers can be distinguished from such data, we consider a system that interfaces to the vehicle bus and executes supervised or unsupervised driver differentiation techniques on this data. To facilitate this analysis and to evaluate the system, we collect in-vehicle data from 24 drivers on a controlled campus test route, as well as 480 trips over three weeks from five shared university mail vans. We also conduct studies among members of a family. The results show that driver differentiation does not require long sequences of driving telemetry data but can be accomplished with 91% accuracy within 20 seconds of the driver entering the vehicle, usually even before the vehicle starts moving.
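As a hedged sketch of the classification step, features summarizing the first seconds of in-vehicle sensor data (before the vehicle moves) can feed a standard supervised classifier with one label per driver. The synthetic feature matrix and the choice of a random forest below are illustrative assumptions, not the paper's exact feature set or model.

```python
# Hedged sketch of pre-trip driver identification as supervised classification.
# In practice each row would summarize a trip's first ~20 seconds (seat position,
# mirror angles, pedal presses, ...); here random placeholders stand in for them.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trips, n_features, n_drivers = 480, 12, 24
X = rng.normal(size=(n_trips, n_features))    # placeholder feature matrix
y = rng.integers(0, n_drivers, size=n_trips)  # placeholder driver labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)     # per-fold identification accuracy
print(f"mean accuracy: {scores.mean():.2f}")
```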
Citations: 20
Towards efficient edge cloud augmentation for virtual reality MMOGs
Pub Date: 2017-10-12 DOI: 10.1145/3132211.3134463
Wuyang Zhang, Jiachen Chen, Yanyong Zhang, D. Raychaudhuri
With the popularity of Massively Multiplayer Online Games (MMOGs) and Virtual Reality (VR) technologies, VR-MMOGs are developing quickly, demanding ever faster gaming interactions and image rendering. In this paper, we identify three main challenges of VR-MMOGs: (1) a stringent latency requirement for frequent local view change responses, (2) a high bandwidth requirement for constant refreshing, and (3) a large-scale requirement to support a large number of simultaneous players. Understanding that a cloud-centric gaming architecture may struggle to meet the latency and bandwidth requirements, the game development community is attempting to leverage edge cloud computing. However, one problem remains unsolved: how to distribute the work among the user device, the edge clouds, and the center cloud to meet all three requirements, especially when users are mobile. In this paper, we propose a hybrid gaming architecture that achieves a clever work distribution. It places local view change updates on edge clouds for immediate responses, frame rendering on edge clouds for high bandwidth, and global game state updates on the center cloud for user scalability. In addition, we propose an efficient service placement algorithm based on a Markov decision process. This algorithm dynamically places a user's gaming service on edge clouds while the user moves through different access points. It also co-places multiple users to facilitate game world sharing and reduce the overall migration overhead. We derive optimal solutions and devise efficient heuristic approaches. We also study different algorithm implementations to speed up the runtime. Through detailed simulation studies, we validate our placement algorithms and also show that our architecture has the potential to meet all three requirements of VR-MMOGs.
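To make the Markov-decision-process placement idea concrete, the sketch below runs plain value iteration over (user access point, current service host) states, trading a per-step latency cost against a one-time migration cost whenever the service moves. The transition matrix, cost values, and discount factor are invented for illustration and do not come from the paper.

```python
# Hedged sketch: value iteration for edge service placement under user mobility.
# State = (user's access point, current service host); action = next host.
import numpy as np

n_aps = 3                                   # access points, also candidate edge clouds
P = np.array([[0.7, 0.2, 0.1],              # P[a, b]: prob. user moves from AP a to AP b
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])
MIGRATION_COST = 8.0                        # cost of moving the service (illustrative)
GAMMA = 0.9                                 # discount factor

def latency_cost(ap, host):
    return 0.0 if ap == host else 5.0       # served locally vs. from a remote edge

V = np.zeros((n_aps, n_aps))                # V[user_ap, service_host]
for _ in range(200):                        # value iteration until (approx.) convergence
    V_new = np.empty_like(V)
    for ap in range(n_aps):
        for host in range(n_aps):
            q_values = []
            for nxt in range(n_aps):        # candidate next host for the service
                move = MIGRATION_COST if nxt != host else 0.0
                future = sum(P[ap, b] * V[b, nxt] for b in range(n_aps))
                q_values.append(latency_cost(ap, nxt) + move + GAMMA * future)
            V_new[ap, host] = min(q_values)
    V = V_new
# The argmin over the same q_values gives the placement policy for each state.
```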
Citations: 81
DeServE: delay-agnostic service offloading in mobile edge clouds: poster
Pub Date: 2017-10-12 DOI: 10.1145/3132211.3132454
Amit Samanta, Yong Li
Mobile edge computing platforms [2, 3] generally measure delay as the total time required to offload the services of edge devices to edge servers. The service delay, however, also depends on the response time of edge services for different mobile applications. Hence, there is a disparity between the actual and the measured service delay experienced by mobile edge devices. This problem arises because the wireless radio network infrastructure is shared by multiple mobile devices; as a result, congestion occurs in the network and arriving packets are likely to be dropped by the switches in the access network. In such situations, real-time critical edge applications care primarily about the quality of service (QoS) delivered to different mobile devices while their services are offloaded to edge servers. We therefore improve the QoS of edge devices using a delay-agnostic service offloading scheme that meets the offloading requirements of such applications. Prior work has tried to fill this gap at the edge network by introducing priority-based computational offloading [1] or energy-efficient resource allocation [4] schemes for edge computing applications. However, priority-based service offloading and resource allocation do not provide fair QoS to mobile devices, because the actual bottleneck usually lies in the delay-agnostic nature of mobile applications. Although existing offloading schemes perform well for multi-modal applications in terms of energy efficiency and computational overhead [1], those works assume that the delay requirement of a service is fixed, whereas in real life it may vary radically. In this poster we refer to this situation as the delay-agnostic property of edge devices. Based on these observations, we propose DESERVE, a delay-agnostic service offloading scheme for mobile edge computing platforms that improves the QoS of each individual device. An overview of DESERVE is depicted in Figure 1. For the edge devices, a flexible and optimal resource allocation technique is exploited in the delay-agnostic service offloading scheme on the edge computing platform, leveraging software-defined networking (SDN) techniques. Instead of installing a centralized SDN controller, an adaptive service identifier is deployed specifically to identify critical edge applications at the edge of the mobile edge platform. After the identification of critical edge services, the identified services are forwarded to the controller, and the corresponding rules are offloaded for those services to the edge servers of the edge platform.
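One way to read the "adaptive service identifier" is as a component that watches how close each application's observed delay comes to its (possibly changing) delay budget and flags only the tight ones as critical, so that offloading rules are installed just for those. The sketch below is an illustrative reading under that assumption; the ratio, window size, and interfaces are not from the poster.

```python
# Hedged sketch of an adaptive service identifier: track how much of each
# application's delay budget is being consumed and flag near-budget services
# as critical candidates for offloading rules.
from collections import defaultdict, deque

CRITICAL_RATIO = 0.8   # delay >= 80% of its budget => treat as critical (illustrative)
WINDOW = 50            # recent samples kept per application

class ServiceIdentifier:
    def __init__(self):
        self.samples = defaultdict(lambda: deque(maxlen=WINDOW))

    def record(self, app_id, observed_delay_ms, delay_budget_ms):
        self.samples[app_id].append(observed_delay_ms / delay_budget_ms)

    def critical_services(self):
        """Applications whose recent delays sit close to their budgets."""
        return [app for app, ratios in self.samples.items()
                if ratios and sum(ratios) / len(ratios) >= CRITICAL_RATIO]

# A controller would then install offloading rules only for
# identifier.critical_services(), leaving best-effort traffic untouched.
```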
Citations: 15
Fast transparent virtual machine migration in distributed edge clouds
Pub Date: 2017-10-12 DOI: 10.1145/3132211.3134445
Lucas Chaufournier, Prateek Sharma, Franck Le, E. Nahum, P. Shenoy, D. Towsley
Edge clouds are emerging as a popular paradigm of computation. In edge clouds, computation and storage can be distributed across a large number of locations, allowing applications to be hosted at the edge of the network close to the end-users. Virtual machine live migration is a key mechanism that enables applications to be nimble and nomadic as they respond to changing user locations and workload. However, VM live migration in edge clouds poses a number of challenges. Migrating VMs between geographically separate locations over slow wide-area network links results in long migration times and high unavailability of the application, due to network reconfiguration delays as user traffic is redirected to the newly migrated location. In this paper, we propose the use of multi-path TCP to improve both VM migration time and the network transparency of applications. We evaluate our approach in a commercial public cloud environment and an emulated lab-based edge cloud testbed under a variety of network conditions, and show that our approach can reduce migration times by up to 2X while virtually eliminating downtime for most applications.
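For context, on recent Linux kernels (5.6 and later, with net.mptcp.enabled set) a bulk migration stream can request multipath TCP at socket creation, after which the kernel path manager decides whether to open subflows over additional interfaces. The sketch below shows that generic pattern with a graceful fallback to plain TCP; it is not the paper's migration tool, which predates upstream kernel MPTCP, and the address is a placeholder.

```python
# Hedged sketch: open an MPTCP connection for bulk migration traffic on a
# Linux kernel with upstream MPTCP support (5.6+), falling back to plain TCP.
import socket

IPPROTO_MPTCP = 262  # Linux protocol number for multipath TCP

def open_migration_stream(host: str, port: int) -> socket.socket:
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    except OSError:
        # Kernel without MPTCP: degrade gracefully to a single-path TCP stream.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))
    return s

# Usage (placeholder address):
#   with open_migration_stream("192.0.2.10", 9000) as s:
#       s.sendall(memory_page_chunk)
```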
Citations: 43
Parkmaster: an in-vehicle, edge-based video analytics service for detecting open parking spaces in urban environments
Pub Date: 2017-10-12 DOI: 10.1145/3132211.3134452
Giulio Grassi, K. Jamieson, P. Bahl, G. Pau
We present the design and implementation of ParkMaster, a system that leverages the ubiquitous smartphone to help drivers find parking spaces in the urban environment. ParkMaster estimates parking space availability using video gleaned from drivers' dash-mounted smartphones on the network's edge, uploading analytics about the street to the cloud in real time as participants drive. Novel lightweight parked-car localization algorithms enable the system to estimate each parked car's approximate location by fusing information from the phone's camera, GPS, and inertial sensors, tracking and counting parked cars as they move through the driving car's camera field of view. To visually calibrate the system, ParkMaster relies only on the size of well-known objects in the urban environment for on-the-go calibration. We implement and deploy ParkMaster on Android smartphones, uploading parking analytics to the Azure cloud. On-the-road experiments in three different environments comprising Los Angeles, Paris, and an Italian village measure the end-to-end accuracy of the system's parking estimates (close to 90%) as well as the amount of cellular data the system requires (less than one megabyte per hour). Drill-down microbenchmarks then analyze the factors contributing to this end-to-end performance, such as video resolution, vision algorithm parameters, and CPU resources.
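The "size of well-known objects" calibration reduces to the pinhole-camera relation: an object of known physical width W metres that spans w pixels lies roughly at distance d = f * W / w, where f is the focal length in pixels. The sketch below applies that relation and crudely projects the detection from the phone's GPS fix and heading; the constants are placeholders, not ParkMaster's calibration values.

```python
# Hedged sketch of monocular distance estimation from an object of known size
# (pinhole model: distance = focal_length_px * real_width_m / width_px) and a
# rough flat-earth projection of the detection into world coordinates.
import math

FOCAL_LENGTH_PX = 1400.0  # phone camera focal length in pixels (placeholder)
CAR_WIDTH_M = 1.8         # typical width of a parked car

def distance_to_object(bbox_width_px, real_width_m=CAR_WIDTH_M):
    return FOCAL_LENGTH_PX * real_width_m / bbox_width_px

def locate_parked_car(lat, lon, bearing_deg, bbox_width_px):
    """Place a detection `distance` metres from the phone's GPS fix along a bearing."""
    d = distance_to_object(bbox_width_px)
    dlat = d * math.cos(math.radians(bearing_deg)) / 111_320.0
    dlon = d * math.sin(math.radians(bearing_deg)) / (111_320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# Example: a car spanning 300 px is about 1400 * 1.8 / 300 ~ 8.4 m away.
```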
Citations: 58
Real-time traffic estimation at vehicular edge nodes
Pub Date: 2017-10-12 DOI: 10.1145/3132211.3134461
Gorkem Kar, Shubham Jain, M. Gruteser, F. Bai, R. Govindan
Traffic estimation has been a long-studied problem, but prior work has mostly provided coarse estimates over large areas. This work proposes effective fine-grained traffic volume estimation using in-vehicle, dashboard-mounted cameras. Existing work on traffic estimation relies on static traffic cameras that are usually deployed at crowded intersections and at some traffic lights. For streets with no traffic cameras, well-known navigation apps (e.g., Google Maps, Waze) are often used to get traffic information, but these applications depend on a limited number of GPS traces to estimate speed and therefore may not show the average speed experienced by every vehicle. Moreover, they do not give any information about the number of vehicles traveling on the road. In this work, we focus on harvesting vehicles as edge compute nodes for sensing and interpreting traffic from live video streams. With this goal, we consider a system that uses the dash-cam video collected on a drive and executes object detection and identification techniques on this data to detect and count vehicles. We use image processing techniques to estimate the lane of travel and the speed of vehicles in real time. To evaluate this system, we recorded several trips on a major highway and a university road. The results show that vehicle counting accuracy depends heavily on traffic conditions, but even during peak hours we achieve more than 90% counting accuracy for vehicles traveling in the leftmost lane. For the detected vehicles, results show that our speed estimation gives less than 10% error across diverse roads and traffic conditions, and over 91% lane estimation accuracy for vehicles traveling in the leftmost lane (i.e., the passing lane).
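As a hedged sketch of the counting step, each detected bounding box can be assigned to a lane by where its bottom centre falls between lane boundaries in image space, and vehicles are counted by unique track ID. The boundary values and the tracker that supplies track IDs are illustrative assumptions, not the paper's calibration or pipeline.

```python
# Hedged sketch of per-lane vehicle counting from dash-cam detections.
# Each detection is (track_id, x1, y1, x2, y2); lane boundaries are pixel
# x-coordinates separating lanes near the bottom of the frame (placeholders).
from bisect import bisect_right

LANE_BOUNDARIES_PX = [400, 800, 1200]  # for an assumed 1600 px wide frame

def lane_of(detection):
    track_id, x1, y1, x2, y2 = detection
    bottom_center_x = (x1 + x2) / 2.0
    return bisect_right(LANE_BOUNDARIES_PX, bottom_center_x)  # 0 = leftmost lane

def count_vehicles_per_lane(detections):
    """detections: (track_id, x1, y1, x2, y2) tuples in frame order."""
    first_lane = {}
    for det in detections:
        first_lane.setdefault(det[0], lane_of(det))  # lane at first appearance
    counts = [0] * (len(LANE_BOUNDARIES_PX) + 1)
    for lane in first_lane.values():
        counts[lane] += 1
    return counts
```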
Citations: 33