
Network-aware Data Management: Latest Publications

Network-aware data caching and prefetching for cloud-hosted metadata retrieval
Pub Date: 2013-11-17 DOI: 10.1145/2534695.2534700
Bing Zhang, Brandon Ross, Sanatkumar Tripathi, Sonali Batra, T. Kosar
With the overwhelming emergence of data-intensive applications in the Cloud, the wide-area transfer of metadata and other descriptive information about remote data is critically important for searching, indexing, and enumerating remote file system hierarchies, as well as for purposes of data transfer estimation and reservation. In this paper, we present a highly efficient network-aware caching and prefetching mechanism tailored to reduce metadata access latency and improve responsiveness in wide-area data transfers. To improve the maximum requests per second (RPS) handled by the system, we designed and implemented a network-aware prefetching service using dynamically provisioned parallel TCP streams. To improve the performance of accessing local metadata, we designed and implemented a non-blocking concurrent in-memory cache to handle unexpected bursts of requests. We have implemented the proposed mechanisms in the Directory Listing Service (DLS) system---a Cloud-hosted metadata retrieval, caching, and prefetching system, and have evaluated its performance on Amazon EC2 and XSEDE.
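The two ingredients named in the abstract, a concurrent in-memory metadata cache and a prefetcher that warms it over several parallel workers, can be sketched in a few lines. The sketch below is an illustration only, not the DLS code: the class, function, and parameter names are invented, the lock-based cache is a simplification of the non-blocking design the paper describes, and the dynamically provisioned parallel TCP streams are reduced to a thread pool.

```python
# Illustrative sketch only -- not the DLS implementation. All names
# (MetadataCache, prefetch_subdirectories, ttl_seconds) are hypothetical.
import threading
import time
from concurrent.futures import ThreadPoolExecutor

class MetadataCache:
    """Thread-safe cache mapping a remote path to its directory listing."""

    def __init__(self, ttl_seconds=60):
        self._entries = {}              # path -> (timestamp, listing)
        self._lock = threading.RLock()  # coarse lock; the paper describes a non-blocking design
        self._ttl = ttl_seconds

    def get(self, path):
        with self._lock:
            entry = self._entries.get(path)
        if entry and time.time() - entry[0] < self._ttl:
            return entry[1]
        return None                      # cache miss or expired entry

    def put(self, path, listing):
        with self._lock:
            self._entries[path] = (time.time(), listing)

def prefetch_subdirectories(cache, listing_fn, parent_listing, workers=4):
    """Fetch child directories in parallel so later requests hit the cache."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for child in parent_listing:
            pool.submit(lambda p=child: cache.put(p, listing_fn(p)))
```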
Citations: 3
The practical obstacles of data transfer: why researchers still love scp
Pub Date: 2013-11-17 DOI: 10.1145/2534695.2534703
H. Nam, Jason Hill, S. Parete-Koon
The importance of computing facilities is heralded every six months with the announcement of the new Top500 list, showcasing the world's fastest supercomputers. Unfortunately, with great computing capability does not come great long-term data storage capacity, which often means users must move their data to their local site archive, to remote sites where they may be doing future computation or analysis, or back to their home institution, else face the dreaded data purge that most HPC centers employ to keep utilization of large parallel filesystems low to manage performance and capacity. At HPC centers, data transfer is crucial to the scientific workflow and will increase in importance as computing systems grow in size. The Energy Sciences Network (ESnet) recently launched its fifth generation network, a 100 Gbps high-performance, unclassified national network connecting more than 40 DOE research sites to support scientific research and collaboration. Despite the tenfold increase in bandwidth to DOE research sites amenable to multiple data transfer streams and high throughput, in practice, researchers often under-utilize the network and resort to painfully-slow single stream transfer methods such as scp to avoid the complexity of using multiple stream tools such as GridFTP and bbcp, and contend with frustration from the lack of consistency of available tools between sites. In this study we survey and assess the data transfer methods provided at several DOE supported computing facilities, including both leadership-computing facilities, connected through ESnet. We present observed transfer rates, suggested optimizations, and discuss the obstacles the tools must overcome to receive wide-spread adoption over scp.
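To make the single-stream bottleneck concrete, the sketch below applies the well-known Mathis et al. TCP throughput approximation, rate ≈ (MSS/RTT)·(C/√p), to a long, lightly lossy path and scales it by the number of parallel streams, as a multi-stream tool such as GridFTP or bbcp would. All numbers (link capacity, RTT, loss rate) are invented for illustration; real transfers also depend on buffer tuning, congestion-control variant, and disk speed.

```python
# Back-of-the-envelope model of why one TCP stream (as used by scp) under-fills
# a long, fat pipe while N parallel streams do better. All values are made up.
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Upper bound on a single TCP stream's throughput for a given loss rate."""
    return (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss_rate))

LINK_BPS = 100e9   # 100 Gbps ESnet-class path (illustrative)
MSS = 1460         # bytes
RTT = 0.060        # 60 ms round-trip time (illustrative)
LOSS = 1e-5        # light random loss (illustrative)

one_stream = mathis_throughput_bps(MSS, RTT, LOSS)
for n in (1, 4, 16, 64):
    aggregate = min(n * one_stream, LINK_BPS)
    print(f"{n:3d} streams: ~{aggregate / 1e9:5.2f} Gbps")
```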
Citations: 8
On causes of GridFTP transfer throughput variance
Pub Date: 2013-11-17 DOI: 10.1145/2534695.2534701
Zhengyang Liu, M. Veeraraghavan, Jianhui Zhou, Jason Hick, Yee-Ting Li
In prior work, we analyzed the GridFTP usage logs collected by data transfer nodes (DTNs) located at national scientific computing centers, and found significant throughput variance even among transfers between the same two end hosts. The goal of this work is to quantify the impact of various factors on throughput variance. Our methodology consisted of executing experiments on a high-speed research testbed, running large-sized instrumented transfers between operational DTNs, and creating statistical models from collected measurements. A non-linear regression model for memory-to-memory transfer throughput as a function of CPU usage at the two DTNs and packet loss rate was created. The model is useful for determining concomitant resource allocations to use in scheduling requests. For example, if a whole NERSC DTN CPU core can be assigned to the GridFTP process executing a large memory-to-memory transfer to SLAC, then only 32% of a CPU core is required at the SLAC DTN for the corresponding GridFTP process due to a difference in the computing speeds of these two DTNs. With these CPU allocations, data can be moved at 6.3 Gbps, which sets the rate to request from the circuit scheduler.
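The abstract does not state the regression equation, so the sketch below only illustrates the kind of fit it describes: a non-linear model of memory-to-memory throughput as a function of CPU availability at the two DTNs and packet loss rate, fitted to measurements. The functional form, parameter values, and data here are all hypothetical and synthetic.

```python
# Hypothetical sketch of fitting a non-linear throughput model; the functional
# form and the synthetic data are NOT from the paper.
import numpy as np
from scipy.optimize import curve_fit

def throughput_model(x, a, b, c, d):
    """Toy form: throughput rises with CPU share at either end and falls
    with the square root of the loss rate (Mathis-like)."""
    cpu_src, cpu_dst, loss = x
    return a * cpu_src**b * cpu_dst**c / (1.0 + d * np.sqrt(loss))

rng = np.random.default_rng(0)
cpu_src = rng.uniform(0.2, 1.0, 200)   # fraction of a core at the source DTN
cpu_dst = rng.uniform(0.2, 1.0, 200)   # fraction of a core at the destination DTN
loss = rng.uniform(1e-6, 1e-4, 200)    # packet loss rate
gbps = throughput_model((cpu_src, cpu_dst, loss), 9.0, 0.8, 0.6, 40.0)
gbps += rng.normal(0.0, 0.2, gbps.size)  # measurement noise

params, _ = curve_fit(throughput_model, (cpu_src, cpu_dst, loss), gbps,
                      p0=[5.0, 1.0, 1.0, 10.0])
print("fitted [a, b, c, d]:", np.round(params, 2))
```

A fitted model of this shape is what lets a scheduler pick concomitant CPU allocations at the two DTNs for a target circuit rate, as in the 32% / 6.3 Gbps example above.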
Citations: 4
Evaluating I/O aware network management for scientific workflows on networked clouds
Pub Date: 2013-11-17 DOI: 10.1145/2534695.2534698
A. Mandal, P. Ruth, I. Baldin, Yufeng Xin, C. Castillo, M. Rynge, E. Deelman
This paper presents a performance evaluation of scientific workflows on networked cloud systems with particular emphasis on evaluating the effect of provisioned network bandwidth on application I/O performance. The experiments were run on ExoGENI, a widely distributed networked infrastructure as a service (NIaaS) testbed. ExoGENI orchestrates a federation of independent cloud sites located around the world along with backbone circuit providers. The evaluation used a representative data-intensive scientific workflow application called Montage. The application was deployed on a virtualized HTCondor environment provisioned dynamically from the ExoGENI networked cloud testbed, and managed by the Pegasus workflow manager. The results of our experiments show the effect of modifying provisioned network bandwidth on disk I/O throughput and workflow execution time. The marginal benefit as perceived by the workflow reduces as the network bandwidth allocation increases to a point where disk I/O saturates. There is little or no benefit from increasing network bandwidth beyond this inflection point. The results also underline the importance of network and I/O performance isolation for predictable application performance, and are applicable for general data-intensive workloads. Insights from this work will also be useful for real-time monitoring, application steering and infrastructure planning for data-intensive workloads on networked cloud platforms.
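The inflection-point behaviour described above can be captured with a one-line model: the effective stage-in rate is roughly min(provisioned bandwidth, disk I/O ceiling), so bandwidth beyond the ceiling buys nothing. The numbers in the sketch below are invented purely to show the shape of that curve; they are not values measured in the paper.

```python
# Toy illustration of the saturation effect: extra provisioned network
# bandwidth stops helping once virtualized disk I/O becomes the bottleneck.
# The disk ceiling and data-set size are invented numbers.
DISK_CEILING_MB_S = 400      # hypothetical sustained disk write rate (MB/s)
DATA_SET_MB = 50_000         # hypothetical workflow stage-in volume (MB)

for net_mb_s in (100, 200, 400, 800, 1600):
    effective = min(net_mb_s, DISK_CEILING_MB_S)
    stage_in_s = DATA_SET_MB / effective
    print(f"provisioned {net_mb_s:4d} MB/s -> effective {effective:4d} MB/s, "
          f"stage-in ~{stage_in_s:6.1f} s")
```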
Citations: 12