Cost-aware & Fault-tolerant Geo-distributed Edge Computing for Low-latency Stream Processing

Jinlai Xu, Balaji Palanisamy
DOI: 10.1109/CIC52973.2021.00026
Venue: 2021 IEEE 7th International Conference on Collaboration and Internet Computing (CIC)
Published: 2021-12-01
Citations: 0

Abstract

The number of Internet-of-Things (IoT) devices is rapidly increasing with the growth of IoT applications in various domains. As IoT applications have a strong demand for low latency and high throughput computing, stream processing using edge computing resources is a promising approach to support low latency processing of large-scale data. Edge-based stream processing extends the capability of cloud-based stream processing by processing the data streams near the edge of the network. In this vision paper, we discuss a distributed stream processing framework that optimizes the performance of stream processing applications through a careful allocation of geo-distributed computing and network resources available in edge computing environments. The framework includes key optimizations in both the platform layer and the infrastructure layer. While the platform layer is responsible for converting the user program into a stream processing physical plan and optimizing the physical plan and operator placement, the infrastructure layer is responsible for provisioning geo-distributed resources to the platform layer. The framework optimizes the performance of stream query processing at the platform layer through its careful consideration of data locality and resource constraints during physical plan generation and operator placement and by incorporating resilience to deal with failures. The framework also includes techniques to dynamically determine the level of parallelism to adapt to changing workload conditions. At the infrastructure layer, the framework includes a novel model for allocating computing resources in edge and geo-distributed cloud computing environments by carefully considering latency and cost. End users benefit from the platform through reduced cost and improved user experience in terms of response time and latency.
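The abstract describes placing stream operators on geo-distributed sites while weighing latency against cost under resource constraints. The sketch below illustrates that trade-off with a simple greedy heuristic; all names (`Site`, `place_operator`, the weights, and the example sites) are illustrative assumptions, not the paper's actual model or API.

```python
# Hypothetical sketch of a cost/latency-aware operator placement heuristic.
# All names and numbers are illustrative; the paper's actual model differs.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    latency_ms: float      # network latency from the data source to this site
    cost_per_hour: float   # price of one compute slot at this site
    free_slots: int        # remaining capacity (the resource constraint)

def place_operator(sites, latency_weight=0.7, cost_weight=0.3):
    """Greedily pick the feasible site with the lowest weighted score.

    Normalizes latency and cost to [0, 1] so the two weights are
    comparable, then returns the best-scoring site that still has
    capacity, reserving one slot on it.
    """
    feasible = [s for s in sites if s.free_slots > 0]
    if not feasible:
        raise RuntimeError("no site satisfies the resource constraints")
    max_lat = max(s.latency_ms for s in feasible) or 1.0
    max_cost = max(s.cost_per_hour for s in feasible) or 1.0

    def score(s):
        return (latency_weight * s.latency_ms / max_lat
                + cost_weight * s.cost_per_hour / max_cost)

    best = min(feasible, key=score)
    best.free_slots -= 1   # reserve a slot for the placed operator
    return best

sites = [
    Site("edge-A", latency_ms=5.0,  cost_per_hour=0.30, free_slots=2),
    Site("edge-B", latency_ms=8.0,  cost_per_hour=0.10, free_slots=4),
    Site("cloud",  latency_ms=60.0, cost_per_hour=0.05, free_slots=100),
]
print(place_operator(sites).name)  # → edge-B: slightly slower than edge-A but far cheaper
```

With these weights the heuristic prefers the nearby-but-cheap edge site over both the fastest edge site and the cheapest cloud site, which mirrors the abstract's point that neither latency nor cost alone should drive allocation. The paper's framework additionally handles fault tolerance and dynamic parallelism, which this toy sketch omits.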