Database caching for job-level computing

H. Chiang, Ting-Han Wei, I-Chen Wu
2016 Conference on Technologies and Applications of Artificial Intelligence (TAAI)
DOI: 10.1109/TAAI.2016.7880170
Citations: 0

Abstract

This paper improves upon Job-Level (JL) computing, a general distributed computing approach. In JL computing, a client maintains the overall search tree and parcels the overall search into coarse-grained jobs, which are then each calculated by pre-existing game-playing programs. In order to support large-scale problems such as solving 7×7 killall-Go, or building opening books for 9×9 Go or Connect6, JL computing is modified so that the entire search tree is stored in a database, as opposed to simply being stored in the client process' memory. However, the time cost of accessing this database becomes a bottleneck on performance when using a large number of computing resources. This paper proposes a cache mechanism for JL search trees. Instead of the previous approach, where the entire search tree is stored in the database, we maintain parts of the search tree in the memory of the client process to reduce the number of database accesses. Our method significantly improves the performance of job operations. Assuming that each job requires 30 seconds on average, the JL application with this cache mechanism can allow for the use of 5036 distributed computing resources in parallel without database accesses becoming the performance bottleneck.
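The caching idea described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the class and method names (`NodeCache`, `get`/`put`/`flush`) and the dict-backed stand-in for the tree database are assumptions. The essential points it shows are that reads hit the database only on a cache miss and that writes are deferred and batched, so most job operations touch only client memory.

```python
# Sketch of a client-side cache for a database-backed search tree.
# Hypothetical names; the real system stores the JL search tree in a DBMS.

class NodeCache:
    def __init__(self, db):
        self.db = db        # dict-like stand-in for the search-tree database
        self.cache = {}     # position key -> node record, held in client memory
        self.dirty = set()  # keys modified since the last flush

    def get(self, key):
        # Only a cache miss costs a database access.
        if key not in self.cache:
            self.cache[key] = self.db[key]
        return self.cache[key]

    def put(self, key, node):
        # Update in memory and defer the database write.
        self.cache[key] = node
        self.dirty.add(key)

    def flush(self):
        # Batch all deferred writes back to the database.
        for key in self.dirty:
            self.db[key] = self.cache[key]
        self.dirty.clear()

# Eviction of cold nodes (needed for trees larger than client memory)
# is omitted from this sketch.
```

The 5036 figure in the abstract follows from throughput arithmetic: with 30-second jobs, the client can keep N workers busy only if its per-job handling time (including any remaining database accesses) stays below 30/N seconds, so N = 5036 implies roughly 6 ms of client-side work per job.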