
Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing: Latest Publications

QUDOS: quorum-based cloud-edge distributed DNNs for security enhanced industry 4.0
Kevin Wallis, C. Reich, B. Varghese, C. Schindelhauer
Distributed machine learning algorithms that employ Deep Neural Networks (DNNs) are widely used in Industry 4.0 applications, such as smart manufacturing. The layers of a DNN can be mapped onto different nodes located in the cloud, edge and shop floor for preserving privacy. The quality of the data that is fed into and processed through the DNN is of utmost importance for critical tasks, such as inspection and quality control. Distributed Data Validation Networks (DDVNs) are used to validate the quality of the data. However, they are prone to single points of failure when an attack occurs. This paper proposes QUDOS, an approach that enhances the security of a distributed DNN that is supported by DDVNs using quorums. The proposed approach allows individual nodes that are corrupted due to an attack to be detected or excluded when the DNN produces an output. Metrics such as corruption factor and success probability of an attack are considered for evaluating the security aspects of DNNs. A simulation study demonstrates that if the number of corrupted nodes is less than a given threshold for decision-making in a quorum, the QUDOS approach always prevents attacks. Furthermore, the study shows that increasing the size of the quorum has a better impact on security than increasing the number of layers. One merit of QUDOS is that it enhances the security of DNNs without requiring any modifications to the algorithm and can therefore be applied to other classes of problems.
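A minimal Python sketch of the quorum idea described above: an output is accepted only when enough quorum members agree on the same value. The function name, voting rule, and threshold semantics are assumptions made for illustration, not the paper's actual protocol.

```python
from collections import Counter

def quorum_decision(outputs, agreement_threshold):
    """Accept a layer output only if at least `agreement_threshold`
    quorum members report the same value; otherwise flag a possible
    attack. Illustrative sketch only -- not the QUDOS protocol itself."""
    value, votes = Counter(outputs).most_common(1)[0]
    if votes >= agreement_threshold:
        return value  # enough honest agreement: accept the output
    raise RuntimeError("no quorum agreement: possible corrupted nodes")

# Example: a quorum of 5 nodes, one corrupted by an attacker.
# Since fewer nodes are corrupted than the decision threshold allows,
# the majority value still wins, matching the abstract's claim.
print(quorum_decision(["ok", "ok", "ok", "ok", "bad"], agreement_threshold=4))
```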
{"title":"QUDOS: quorum-based cloud-edge distributed DNNs for security enhanced industry 4.0","authors":"Kevin Wallis, C. Reich, B. Varghese, C. Schindelhauer","doi":"10.1145/3468737.3494094","DOIUrl":"https://doi.org/10.1145/3468737.3494094","url":null,"abstract":"Distributed machine learning algorithms that employ Deep Neural Networks (DNNs) are widely used in Industry 4.0 applications, such as smart manufacturing. The layers of a DNN can be mapped onto different nodes located in the cloud, edge and shop floor for preserving privacy. The quality of the data that is fed into and processed through the DNN is of utmost importance for critical tasks, such as inspection and quality control. Distributed Data Validation Networks (DDVNs) are used to validate the quality of the data. However, they are prone to single points of failure when an attack occurs. This paper proposes QUDOS, an approach that enhances the security of a distributed DNN that is supported by DDVNs using quorums. The proposed approach allows individual nodes that are corrupted due to an attack to be detected or excluded when the DNN produces an output. Metrics such as corruption factor and success probability of an attack are considered for evaluating the security aspects of DNNs. A simulation study demonstrates that if the number of corrupted nodes is less than a given threshold for decision-making in a quorum, the QUDOS approach always prevents attacks. Furthermore, the study shows that increasing the size of the quorum has a better impact on security than increasing the number of layers. One merit of QUDOS is that it enhances the security of DNNs without requiring any modifications to the algorithm and can therefore be applied to other classes of problems.","PeriodicalId":254382,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129911938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Predictive auto-scaling with OpenStack Monasca
Giacomo Lanciano, Filippo Galli, T. Cucinotta, D. Bacciu, A. Passarella
Cloud auto-scaling mechanisms are typically based on reactive automation rules that scale a cluster whenever some metric, e.g., the average CPU usage among instances, exceeds a predefined threshold. Tuning these rules becomes particularly cumbersome when scaling up a cluster involves non-negligible times to bootstrap new instances, as happens frequently in production cloud services. To deal with this problem, we propose an architecture for auto-scaling cloud services based on the status into which the system is expected to evolve in the near future. Our approach leverages time-series forecasting techniques, such as those based on machine learning and artificial neural networks, to predict the future dynamics of key metrics, e.g., resource consumption metrics, and applies a threshold-based scaling policy to them. The result is a predictive automation policy that is able, for instance, to automatically anticipate peaks in the load of a cloud application and trigger appropriate scaling actions ahead of time to accommodate the expected increase in traffic. We prototyped our approach as an open-source OpenStack component, which relies on, and extends, the monitoring capabilities offered by Monasca, resulting in the addition of predictive metrics that can be leveraged by orchestration components like Heat or Senlin. We show experimental results using a recurrent neural network and a multi-layer perceptron as predictors, compared with a simple linear regression and a traditional non-predictive auto-scaling policy. The proposed framework, however, allows for easy customization of the prediction policy as needed.
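The predict-then-threshold loop can be illustrated with a short Python sketch. The linear extrapolation below is a self-contained stand-in for the paper's RNN/MLP predictors, and all function names, thresholds, and the forecast horizon are assumptions for illustration, not part of the Monasca component.

```python
import numpy as np

def predict_next(cpu_history, horizon=5):
    """Fit a linear trend to recent CPU samples and extrapolate
    `horizon` steps ahead (a toy substitute for the RNN/MLP predictors)."""
    t = np.arange(len(cpu_history))
    slope, intercept = np.polyfit(t, cpu_history, 1)
    return slope * (len(cpu_history) - 1 + horizon) + intercept

def scaling_decision(cpu_history, scale_up_at=80.0, scale_down_at=20.0):
    """Apply the threshold policy to the *predicted* metric instead of
    the current one, so scaling starts before the load peak arrives."""
    predicted = predict_next(cpu_history)
    if predicted > scale_up_at:
        return "scale-up"
    if predicted < scale_down_at:
        return "scale-down"
    return "hold"

# Rising load: the forecast crosses the threshold before the raw metric does.
print(scaling_decision([40, 48, 55, 61, 70, 76]))  # -> "scale-up"
```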
{"title":"Predictive auto-scaling with OpenStack Monasca","authors":"Giacomo Lanciano, Filippo Galli, T. Cucinotta, D. Bacciu, A. Passarella","doi":"10.1145/3468737.3494104","DOIUrl":"https://doi.org/10.1145/3468737.3494104","url":null,"abstract":"Cloud auto-scaling mechanisms are typically based on reactive automation rules that scale a cluster whenever some metric, e.g., the average CPU usage among instances, exceeds a predefined threshold. Tuning these rules becomes particularly cumbersome when scaling-up a cluster involves non-negligible times to bootstrap new instances, as it happens frequently in production cloud services. To deal with this problem, we propose an architecture for auto-scaling cloud services based on the status in which the system is expected to evolve in the near future. Our approach leverages on time-series forecasting techniques, like those based on machine learning and artificial neural networks, to predict the future dynamics of key metrics, e.g., resource consumption metrics, and apply a threshold-based scaling policy on them. The result is a predictive automation policy that is able, for instance, to automatically anticipate peaks in the load of a cloud application and trigger ahead of time appropriate scaling actions to accommodate the expected increase in traffic. We prototyped our approach as an open-source OpenStack component, which relies on, and extends, the monitoring capabilities offered by Monasca, resulting in the addition of predictive metrics that can be leveraged by orchestration components like Heat or Senlin. We show experimental results using a recurrent neural network and a multi-layer perceptron as predictor, which are compared with a simple linear regression and a traditional non-predictive auto-scaling policy. However, the proposed framework allows for the easy customization of the prediction policy as needed.","PeriodicalId":254382,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121280213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
System-aware dynamic partitioning for batch and streaming workloads
Zoltan Zvara, Péter G. N. Szabó, Balázs Barnabás Lóránt, András A. Benczúr
When processing data streams with highly skewed and nonstationary key distributions, we often observe overloaded partitions when hash partitioning fails to balance data correctly. To avoid slow tasks that delay the completion of the whole stage of computation, it is necessary to apply adaptive, on-the-fly partitioning that continuously recomputes an optimal partitioner, given the observed key distribution. While such solutions exist for batch processing of static data sets and for stateless stream processing, the task is difficult for long-running stateful streaming jobs where the key distribution changes over time. Careful checkpointing and operator state migration are necessary to change the partitioning while the operation is running. Our key result is a lightweight on-the-fly Dynamic Repartitioning (DR) module for distributed data processing systems (DDPS), including Apache Spark and Flink, which improves performance with negligible overhead. DR can adaptively repartition data during execution using our Key Isolator Partitioner (KIP). In our experiments with real workloads and power-law distributions, we reach a speedup of 1.5-6 for a variety of Spark and Flink jobs.
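A rough Python sketch of the key-isolation idea: observed heavy hitters each get a dedicated partition, while the long tail is hash-partitioned over the remaining slots. The `heavy_keys` mapping, the tail hashing, and all names are illustrative assumptions; the real KIP recomputes the heavy-key set on the fly from the monitored key distribution.

```python
def kip_partition(key, heavy_keys, num_partitions):
    """Route each known heavy key to its own partition and hash the
    remaining keys over the leftover partitions. Sketch only -- not
    the actual KIP implementation."""
    if key in heavy_keys:
        return heavy_keys[key]                 # dedicated partition
    tail = num_partitions - len(heavy_keys)    # slots left for the tail
    return len(heavy_keys) + hash(key) % tail  # hashed long tail

# Two heavy keys pinned to partitions 0 and 1; all other keys hash to 2..7.
heavy = {"user42": 0, "user7": 1}
print(kip_partition("user42", heavy, 8), kip_partition("user1234", heavy, 8))
```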
{"title":"System-aware dynamic partitioning for batch and streaming workloads","authors":"Zoltan Zvara, Péter G. N. Szabó, Bal'azs Barnab'as L'or'ant, Andr'as A. Bencz'ur","doi":"10.1145/3468737.3494087","DOIUrl":"https://doi.org/10.1145/3468737.3494087","url":null,"abstract":"When processing data streams with highly skewed and nonstationary key distributions, we often observe overloaded partitions when the hash partitioning fails to balance data correctly. To avoid slow tasks that delay the completion of the whole stage of computation, it is necessary to apply adaptive, on-the-fly partitioning that continuously recomputes an optimal partitioner, given the observed key distribution. While such solutions exist for batch processing of static data sets and stateless stream processing, the task is difficult for long-running stateful streaming jobs where key distribution changes over time. Careful checkpointing and operator state migration is necessary to change the partitioning while the operation is running. Our key result is a lightweight on-the-fly Dynamic Repartitioning (DR) module for distributed data processing systems (DDPS), including Apache Spark and Flink, which improves the performance with negligible overhead. DR can adaptively repartition data during execution using our Key Isolator Partitioner (KIP). In our experiments with real workloads and power-law distributions, we reach a speedup of 1.5-6 for a variety of Spark and Flink jobs.","PeriodicalId":254382,"journal":{"name":"Proceedings of the 14th IEEE/ACM International Conference on Utility and Cloud Computing","volume":"372 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115987024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1