Demo: The RAINBOW Analytics Stack for the Fog Continuum
Moysis Symeonides, Demetris Trihinas, Joanna Georgiou, Michalis Kasioulis, G. Pallis, M. Dikaiakos, Theodoros Toliopoulos, A. Michailidou, A. Gounaris
2022 IEEE Symposium on Computers and Communications (ISCC), published 2022-06-30
DOI: 10.1109/ISCC55528.2022.9913026
Citations: 0
Abstract
With the proliferation of raw Internet of Things (IoT) data, Fog Computing is emerging as a computing paradigm for delay-sensitive streaming analytics, with operators deploying distributed big data engines on Fog resources [1]. Nevertheless, current (Cloud-based) distributed analytics solutions are unaware of the unique characteristics of Fog realms. For instance, task placement algorithms assume homogeneous underlying resources, ignoring the heterogeneity of Fog nodes and their non-uniform network connections, which results in sub-optimal processing performance. Moreover, data quality plays an important role: corrupted data and network uncertainty may lead to less useful results. In turn, energy consumption can critically impact the overall cost and liveness of the underlying processing infrastructure. Specifically, scheduling tasks on nodes with energy-hungry profiles or on battery-powered devices may temporarily benefit performance, but it may increase the overall cost, and the battery-powered devices may not be available when needed. A Fog-enabled analytics stack must therefore allow users to optimize Fog-specific indicators, or trade-offs among them. For instance, users may sacrifice a portion of execution performance to minimize energy consumption, or vice versa. Beyond the performance issues raised by Fog, state-of-the-art distributed processing engines offer only low-level procedural programming interfaces, so operators face a steep learning curve to master them. Query abstractions are therefore crucial for minimizing deployment time, errors, and debugging effort.
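The performance/energy trade-off described above can be illustrated as a weighted multi-objective placement score. This is a minimal, hypothetical sketch: the node profiles, normalized metrics, and linear weighting are illustrative assumptions, not the actual scheduling algorithm of the RAINBOW stack.

```python
# Hypothetical sketch of Fog-aware task placement as a weighted trade-off.
# Node profiles and the linear scoring scheme are illustrative assumptions,
# not the RAINBOW stack's actual placement algorithm.

def placement_score(node, w_perf, w_energy):
    """Lower is better: combine normalized latency and energy cost linearly."""
    return w_perf * node["norm_latency"] + w_energy * node["norm_energy"]

# Normalized (0..1) profiles for three hypothetical Fog-continuum nodes.
nodes = [
    {"name": "edge-battery", "norm_latency": 0.2, "norm_energy": 0.9},
    {"name": "fog-gateway",  "norm_latency": 0.4, "norm_energy": 0.3},
    {"name": "cloud-vm",     "norm_latency": 0.8, "norm_energy": 0.5},
]

# A performance-first weighting favors the fast but energy-hungry edge node,
# while an energy-first weighting shifts placement to the fog gateway.
best_perf = min(nodes, key=lambda n: placement_score(n, w_perf=0.9, w_energy=0.1))
best_energy = min(nodes, key=lambda n: placement_score(n, w_perf=0.1, w_energy=0.9))

print(best_perf["name"])    # → edge-battery
print(best_energy["name"])  # → fog-gateway
```

Exposing the weights to the user is one simple way a stack can let operators "sacrifice a portion of execution performance to minimize energy consumption, or vice versa."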