Model Checking as a Service using Dynamic Resource Scaling

Surya Teja Palavalasa, Yuvraj Singh, Adhish Singla, Suresh Purini, Venkatesh Choppella
DOI: 10.1109/HiPC50609.2020.00027
Published in: 2020 IEEE 27th International Conference on High Performance Computing, Data, and Analytics (HiPC), December 2020

Abstract

Model checking is now a standard technology for verifying large and complex systems. While there is a range of tools and techniques for verifying various properties of a system under consideration, in this work we restrict our attention to safety-checking procedures that use explicit state-space generation. The hardware resources this approach requires depend on the model's complexity and on the state transition graph that gets generated, and they cannot be estimated a priori. For reasonably realistic models, the available main memory of even high-end servers may not suffice; hence, we have to use distributed safety verification approaches on a cluster of nodes. However, the problem of estimating the minimum number of cluster nodes needed for the verification procedure to complete successfully remains unsolved. In this paper, we propose a dynamically scalable model checker built on an actor-based architecture. Using the proposed approach, an end user can invoke a model checker hosted on a cloud platform in a push-button fashion. Our safety verification procedure automatically expands the cluster by requesting more virtual machines from the cloud provider, and the user pays only for the hardware resources rented for the duration of the verification procedure. We refer to this as Model Checking as a Service. We approach this problem by proposing an asynchronous algorithm for safety checking in an actor framework. The actor-based approach allows resources to be scaled on a need basis and redistributes the workload transparently through state migration. We tested our approach by developing a distributed version of the SpinJA model checker using the Akka actor framework. We conducted our experiments on the Google Compute Engine (GCE) platform, where we scale our resources automatically using the GCE API.
On large models such as anderson.8 from the BEEM benchmark suite, our approach reduced the model-checking cost in dollars by 8.6x while reducing the wall-clock time to complete the safety-checking procedure by 5.5x.
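The abstract's two core mechanisms — partitioning the explored state space across workers by hashing each state to an owner, and transparently migrating states when the cluster grows — can be sketched as follows. This is an illustrative Python sketch, not the authors' Akka/SpinJA implementation: the function names (`owner`, `safety_check`, `scale_out`) are assumptions, and a sequential loop stands in for the asynchronous actor message exchange.

```python
# Illustrative sketch (assumed names, not the paper's code): hash-partitioned
# ownership of the visited-state set, plus state migration on scale-out.
from collections import deque

def owner(state, num_workers):
    """Hash partitioning: each state is owned by exactly one worker."""
    return hash(state) % num_workers

def safety_check(initial, successors, is_safe, num_workers):
    """BFS over the explicit state space, with per-worker visited stores.

    In the actor setting each worker explores its own states concurrently
    and forwards non-owned successors; here a single loop simulates that.
    Returns (True, None) if all reachable states are safe, else
    (False, violating_state).
    """
    visited = [set() for _ in range(num_workers)]  # one store per worker
    frontier = deque([initial])
    while frontier:
        s = frontier.popleft()
        w = owner(s, num_workers)          # route the state to its owner
        if s in visited[w]:
            continue                       # already explored by its owner
        visited[w].add(s)
        if not is_safe(s):
            return False, s                # safety violation found
        frontier.extend(successors(s))
    return True, None

def scale_out(visited):
    """Add one worker; migrate every state whose owner changed on rehashing."""
    old_n = len(visited)
    new_visited = [set() for _ in range(old_n + 1)]
    for store in visited:
        for s in store:
            new_visited[owner(s, old_n + 1)].add(s)
    return new_visited
```

For example, on a toy model whose states are counters modulo 10, `safety_check(0, lambda s: [(s + 1) % 10], lambda s: s != 7, 3)` reports the violating state 7, and `scale_out` redistributes an existing 3-worker partition over 4 workers without losing any state — the same rebalancing the paper performs when a new VM joins the cluster.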