Scheduling memory access on a distributed cloud storage network

R. Rojas-Cessa, L. Cai, T. Kijkanjanarat
{"title":"分布式云存储网络的内存访问调度","authors":"R. Rojas-Cessa, L. Cai, T. Kijkanjanarat","doi":"10.1109/WOCC.2012.6198152","DOIUrl":null,"url":null,"abstract":"Memory-access speed continues falling behind the growing speeds of network transmission links. High-speed network links provide a means to connect memory placed in hosts, located in different corners of the network. These hosts are called storage system units (SSUs), where data can be stored. Cloud storage provided with a single server can facilitate large amounts of storage to a user, however, at low access speeds. A distributed approach to cloud storage is an attractive solution. In a distributed cloud, small high-speed memories at SSUs can potentially increase the memory access speed for data processing and transmission. However, the latencies of each SSUs may be different. Therefore, the selection of SSUs impacts the overall memory access speed. This paper proposes a latency-aware scheduling scheme to access data from SSUs. This scheme determines the minimum latency requirement for a given dataset and selects available SSUs with the required latencies. Furthermore, because the latencies of some selected SSUs may be large, the proposed scheme notifies SSUs in advance of the expected time to perform data access. The simulation results show that the proposed scheme achieves faster access speeds than a scheme that randomly selects SSUs and another hat greedily selects SSUs with small latencies.","PeriodicalId":118220,"journal":{"name":"2012 21st Annual Wireless and Optical Communications Conference (WOCC)","volume":"118 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Scheduling memory access on a distributed cloud storage network\",\"authors\":\"R. Rojas-Cessa, L. Cai, T. Kijkanjanarat\",\"doi\":\"10.1109/WOCC.2012.6198152\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Memory-access speed continues falling behind the growing speeds of network transmission links. High-speed network links provide a means to connect memory placed in hosts, located in different corners of the network. These hosts are called storage system units (SSUs), where data can be stored. Cloud storage provided with a single server can facilitate large amounts of storage to a user, however, at low access speeds. A distributed approach to cloud storage is an attractive solution. In a distributed cloud, small high-speed memories at SSUs can potentially increase the memory access speed for data processing and transmission. However, the latencies of each SSUs may be different. Therefore, the selection of SSUs impacts the overall memory access speed. This paper proposes a latency-aware scheduling scheme to access data from SSUs. This scheme determines the minimum latency requirement for a given dataset and selects available SSUs with the required latencies. Furthermore, because the latencies of some selected SSUs may be large, the proposed scheme notifies SSUs in advance of the expected time to perform data access. 
The simulation results show that the proposed scheme achieves faster access speeds than a scheme that randomly selects SSUs and another hat greedily selects SSUs with small latencies.\",\"PeriodicalId\":118220,\"journal\":{\"name\":\"2012 21st Annual Wireless and Optical Communications Conference (WOCC)\",\"volume\":\"118 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-04-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2012 21st Annual Wireless and Optical Communications Conference (WOCC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/WOCC.2012.6198152\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 21st Annual Wireless and Optical Communications Conference (WOCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WOCC.2012.6198152","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6

Abstract

Memory-access speed continues to fall behind the growing speeds of network transmission links. High-speed network links provide a means to connect memory placed in hosts located in different corners of the network. These hosts are called storage system units (SSUs), where data can be stored. Cloud storage provided by a single server can offer large amounts of storage to a user, but only at low access speeds. A distributed approach to cloud storage is therefore an attractive solution. In a distributed cloud, small high-speed memories at SSUs can potentially increase the memory-access speed for data processing and transmission. However, the latency of each SSU may differ, so the selection of SSUs affects the overall memory-access speed. This paper proposes a latency-aware scheduling scheme to access data from SSUs. The scheme determines the minimum latency requirement for a given dataset and selects available SSUs that meet the required latencies. Furthermore, because the latencies of some selected SSUs may be large, the proposed scheme notifies SSUs in advance of the expected time to perform data access. The simulation results show that the proposed scheme achieves faster access speeds than a scheme that randomly selects SSUs and another that greedily selects SSUs with small latencies.
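The abstract describes the scheme only at a high level. As a rough illustration, the sketch below shows how an SSU pool might be filtered against a dataset's latency requirement and how the selected SSUs could be notified ahead of the expected access time; the class `Ssu` and the functions `select_ssus` and `notify_schedule` are hypothetical names chosen for this example and are not taken from the paper.

```python
# A minimal sketch of latency-aware SSU selection, assuming each SSU is
# characterized only by its access latency and current availability.
# The paper's actual scheduler, data layout, and notification protocol
# are not detailed in the abstract, so this is illustrative only.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Ssu:
    ssu_id: int
    latency_ms: float   # round-trip access latency seen by the scheduler
    available: bool     # whether the SSU can serve a request now


def select_ssus(ssus: List[Ssu], max_latency_ms: float, needed: int) -> List[Ssu]:
    """Pick `needed` available SSUs whose latency meets the dataset's
    latency requirement, preferring the fastest candidates."""
    candidates = [s for s in ssus if s.available and s.latency_ms <= max_latency_ms]
    candidates.sort(key=lambda s: s.latency_ms)
    return candidates[:needed]


def notify_schedule(selected: List[Ssu], start_ms: float) -> Dict[int, float]:
    """Tell each selected SSU when its data access is expected, so a
    higher-latency SSU is contacted early enough to respond on time."""
    # Issue each request `latency_ms` ahead of the planned access time.
    return {s.ssu_id: max(0.0, start_ms - s.latency_ms) for s in selected}


if __name__ == "__main__":
    pool = [Ssu(1, 5.0, True), Ssu(2, 40.0, True), Ssu(3, 12.0, False), Ssu(4, 20.0, True)]
    chosen = select_ssus(pool, max_latency_ms=25.0, needed=2)
    print([s.ssu_id for s in chosen])               # -> [1, 4]
    print(notify_schedule(chosen, start_ms=100.0))  # -> {1: 95.0, 4: 80.0}
```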