
2010 IEEE Fifth International Conference on Networking, Architecture, and Storage — Latest Publications

Concentric Layout, a New Scientific Data Distribution Scheme in Hadoop File System
Lu Cheng, Pengju Shang, S. Sehrish, Grant Mackey, Jun Wang
The data generated by scientific simulations, sensors, monitors, and optical telescopes has grown at a dramatic speed. To analyze raw data quickly and space-efficiently, a data pre-processing step is needed to achieve better performance in the analysis phase. Current research shows an increasing trend of adopting the MapReduce framework for large-scale data processing. However, the data access patterns generally applied to scientific data sets are not directly supported by the current MapReduce framework. This gap between the requirements of analytics applications and the properties of the MapReduce framework motivates us to support these access patterns within MapReduce. In this work, we study the data access patterns in matrix files and propose a new concentric data layout to facilitate matrix data access and analysis in the MapReduce framework. The concentric data layout is a hierarchical layout that preserves the dimensional properties of large data sets. In contrast to the contiguous data layout adopted by the current Hadoop framework, the concentric layout stores the data of the same sub-matrix in one chunk, and then stores chunks symmetrically at a higher level, which matches matrix-like computation well. The concentric layout preprocesses the data beforehand and thereby optimizes subsequent runs of a MapReduce application. Experiments show that the concentric data layout improves overall performance, reducing execution time by about 38% when reading a 64 GB file. It also mitigates the overhead of reading unused data, increasing useful-data efficiency by 32% on average.
DOI: 10.1109/NAS.2010.59 (published 2010-07-15)
Citations: 1
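The chunking idea above — storing each sub-matrix contiguously instead of row-major — can be sketched in a few lines of Python. This is an illustrative toy, not the paper's implementation; the function name `concentric_chunks` and the 2×2 block size are my own choices.

```python
def concentric_chunks(matrix, block):
    """Reorder a 2-D matrix (list of lists) so that each block x block
    sub-matrix becomes one contiguous run -- one 'chunk' -- instead of
    Hadoop's plain row-major contiguous layout."""
    n_rows, n_cols = len(matrix), len(matrix[0])
    layout = []
    for i in range(0, n_rows, block):
        for j in range(0, n_cols, block):
            # one chunk = one flattened sub-matrix
            for r in range(i, min(i + block, n_rows)):
                layout.extend(matrix[r][j:j + block])
    return layout

m = [[r * 4 + c for c in range(4)] for r in range(4)]
first_chunk = concentric_chunks(m, 2)[:4]   # top-left 2x2 sub-matrix: 0, 1, 4, 5
```

With this ordering, a mapper that works on one sub-matrix reads a single contiguous run rather than scattered row fragments, which is the access pattern the paper optimizes for.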
An ESB Based Micro-scale Urban Air Quality Monitoring System
Tu Quach Ngoc, Jonghyun Lee, Kyung Jun Gil, Karpjoo Jeong, S. Lim
In this paper, we present a novel approach to micro-scale air quality monitoring for urban areas. The approach is based on two major technologies: wireless sensor networks (WSN) and service-oriented architecture (SOA). We discuss technical issues such as architectural design, system integration, and user interfaces, and present a prototype system developed for Konkuk University that uses an Enterprise Service Bus (ESB) system called ServiceMix.
DOI: 10.1109/NAS.2010.60 (published 2010-07-15)
Citations: 4
Towards Fast De-duplication Using Low Energy Coprocessor
Liang Ma, Caijun Zhen, Bin Zhao, Jingwei Ma, G. Wang, X. Liu
Backup technology based on data de-duplication has become a hot research topic. To obtain better performance, traditional research has mainly focused on decreasing disk access time. In this paper, we consider the computational complexity of data de-duplication systems and try to improve system performance by reducing computing time. We offload computing tasks onto a commodity coprocessor to speed up the computing process. Compared with general-purpose processors, commodity coprocessors have lower energy consumption and lower cost. Experimental results show that they deliver equal or even better performance than general-purpose processors.
DOI: 10.1109/NAS.2010.29 (published 2010-07-15)
Citations: 4
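The CPU-heavy step that the paper offloads to a coprocessor is chunk fingerprinting. A minimal software sketch of chunk-level de-duplication follows; SHA-1 as the fingerprint function is an assumption for illustration — the abstract does not name the hash used.

```python
import hashlib

def deduplicate(chunks):
    """Chunk-level de-duplication: fingerprint each chunk and keep only
    chunks whose fingerprint has not been seen before.  Hashing is the
    CPU-heavy step the paper offloads to a low-energy coprocessor."""
    index, store = {}, []
    for chunk in chunks:
        fp = hashlib.sha1(chunk).hexdigest()   # fingerprint (hash assumed)
        if fp not in index:
            index[fp] = len(store)
            store.append(chunk)
    return store, index

unique, _ = deduplicate([b"aaa", b"bbb", b"aaa", b"aaa"])  # keeps 2 chunks
```

In a backup pipeline the hash calls dominate CPU time, which is why moving them to cheaper silicon can cut both energy and wall-clock cost without touching the disk-access path.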
DAM: A DataOwnership-Aware Multi-layered De-duplication Scheme
Yujuan Tan, D. Feng, Zhichao Yan, Guohui Zhou
Beyond the storage savings brought by chunk-level de-duplication in backup and archiving systems, a prominent challenge facing this technology is how to identify duplicate chunks efficiently and effectively. Because main memory capacity is limited, most of the chunk fingerprints used to identify individual chunks are stored on disk. Checking for a fingerprint match on disk for every input chunk is known to be a severe performance bottleneck of the backup process. On the other hand, both our intuition and our analyses of real backup data indicate that duplicate chunks tend to concentrate strongly according to data ownership. Motivated by this observation, and to avoid or alleviate the aforementioned bottleneck, we propose DAM, a data-ownership-aware multi-layered de-duplication scheme that exploits the data chunks' ownership and uses a tri-layered de-duplication approach to narrow the search space for duplicate chunks and reduce total disk accesses. Our experimental results with real-world datasets show that DAM reduces disk accesses by an average of 60.8% and shortens de-duplication time by an average of 46.3%.
DOI: 10.1109/NAS.2010.57 (published 2010-07-15)
Citations: 7
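The core idea — partitioning the fingerprint index by data owner so a lookup probes a much smaller index — can be sketched as a single-layer toy. Note the hedge: DAM itself is tri-layered, and the function name, the per-owner dictionary, and SHA-1 are illustrative assumptions, not the paper's design.

```python
import hashlib
from collections import defaultdict

def dam_dedup(stream):
    """One-layer sketch of ownership-aware de-duplication: one fingerprint
    index per data owner, so each lookup probes only that owner's (small)
    index instead of a single global one.  `stream` yields (owner, chunk)."""
    per_owner = defaultdict(dict)   # owner -> {fingerprint: chunk id}
    stored = []
    for owner, chunk in stream:
        fp = hashlib.sha1(chunk).hexdigest()
        index = per_owner[owner]    # narrowed search space
        if fp not in index:
            index[fp] = len(stored)
            stored.append(chunk)
    return stored, per_owner

stored, _ = dam_dedup([("u1", b"x"), ("u1", b"x"), ("u2", b"x")])
# u1's duplicate is caught; u2's copy is stored again at this layer --
# cross-owner duplicates are what the scheme's higher layers would catch
```

The smaller per-owner index is what cuts on-disk fingerprint probes; the remaining cross-owner duplicates justify the additional layers in the full scheme.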
Modelling Speculative Prefetching for Hybrid Storage Systems
Mais Nijim
Parallel storage systems are highly scalable and widely used to support data-intensive applications. In future systems characterized by massive data processing and storage, hybrid storage systems offer a solution that fulfills a variety of demands, such as large storage capacity, high I/O performance, and low cost. Hybrid storage systems (HSS) contain both high-end storage components (e.g., solid-state disks and hard disk drives) to guarantee performance and low-end components (e.g., tapes) to reduce cost. In an HSS, transferring data back and forth among solid-state disks (SSDs), hard disk drives (HDDs), and tapes plays a critical role in achieving high I/O performance. Prefetching is a promising way to reduce data-transfer latency in an HSS. However, prefetching in this context is technically challenging because of an interesting dilemma: aggressive prefetching is required to reduce I/O latency efficiently, whereas over-aggressive prefetching may waste I/O bandwidth by transferring useless data from HDDs to SSDs or from tapes to HDDs. To address this problem, we propose a multi-layer prefetching algorithm that can speculatively prefetch data from tapes to HDDs and from HDDs to SSDs. To evaluate the algorithm, we develop an analytical model; the experimental results reveal that our prefetching algorithm improves the performance of hybrid storage systems.
DOI: 10.1109/NAS.2010.27 (published 2010-07-15)
Citations: 7
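The tape→HDD→SSD promotion path and the aggressiveness dilemma can be sketched as follows. This is a hypothetical toy, not the paper's algorithm: the tiers are plain sets of block numbers, and "speculate only on a sequential run" is my stand-in for its guard against over-aggressive prefetching.

```python
def prefetch(request, ssd, hdd, tape, history):
    """Hypothetical multi-layer speculative prefetch: serve the request from
    the fastest tier holding it, promote it one tier up, and -- only when
    the recent history looks sequential -- speculatively promote the next
    block too, guarding against over-aggressive prefetching."""
    def promote(block, upper, lower):
        if block in lower:
            upper.add(block)
    if request in ssd:
        tier = "ssd"
    elif request in hdd:
        tier = "hdd"
        promote(request, ssd, hdd)    # HDD -> SSD
    else:
        tier = "tape"
        promote(request, hdd, tape)   # tape -> HDD
    if history and history[-1] == request - 1:   # sequential run detected
        promote(request + 1, ssd, hdd)
        promote(request + 1, hdd, tape)
    history.append(request)
    return tier
```

The guard condition is where the dilemma lives: loosen it and useless blocks burn tier-to-tier bandwidth; tighten it and latent misses stay on tape.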
A Trust Aware Grid Access Control Architecture Based on ABAC
Tiezhu Zhao, Shoubin Dong
Grid systems face many serious security challenges, among them access control. The attribute-based access control (ABAC) model has merits: it is flexible, fine-grained, and dynamically suited to grid environments. As an important factor in grid security, trust is increasingly applied to security management, especially access control. This paper puts forward a novel trust model for multi-domain grid environments, and a trust factor is introduced into the grid access control architecture to extend the classic ABAC model. By extending the XACML authorization architecture, an extended ABAC-based access control architecture for grids is presented. In our experiments, the increase and decrease of trust are asymmetric and the trust model is sensitive to malicious attacks. It can effectively control the trust changes of different nodes, and the model can effectively reduce the damage of vicious attacks.
DOI: 10.1109/NAS.2010.18 (published 2010-07-15)
Citations: 8
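The asymmetric behaviour reported in the experiments — trust rises slowly, falls sharply — might be modelled as below. The update rules and the `alpha`/`beta` parameters are assumptions for illustration; the paper does not give its formulas in the abstract.

```python
def update_trust(trust, outcome, alpha=0.05, beta=0.3):
    """Asymmetric update (assumed form): trust climbs slowly after a good
    interaction but falls sharply after a bad one, so malicious nodes lose
    trust quickly while well-behaved ones earn it gradually."""
    if outcome == "good":
        trust += alpha * (1.0 - trust)   # slow, bounded increase
    else:
        trust *= (1.0 - beta)            # sharp multiplicative decrease
    return min(max(trust, 0.0), 1.0)

after_good = update_trust(0.8, "good")   # ~0.81
after_bad = update_trust(0.8, "bad")     # ~0.56
```

The asymmetry is the point: a node cannot quickly rebuild trust it lost through an attack, which limits the damage a malicious node can do by alternating good and bad behaviour.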
A MAP Fitting Approach with Joint Approximation Oriented to the Dynamic Resource Provisioning in Shared Data Centres
Xiuwen Wang, Haiping Qu, Lu Xu, Xiaoming Han, Jiangang Zhang
In shared data centres, accurate workload models are indispensable for autonomic resource scheduling. Facing the problem of parameterizing the vast space of large MAPs in order to fit real workload traces with time-varying characteristics, we propose JAMC, a MAP fitting approach with joint approximation of the order moments and the lag correlations. Based on the state-of-the-art fitting method KPC, JAMC uses a similar divide-and-conquer approach to simplify the fitting problem and uses optimization to find the best solution. Our experiments show that JAMC is simple yet sufficient to effectively predict the behavior of queueing systems, and a fitting time of a few minutes is acceptable for a shared data centre. Through an analysis of sensitivity to the fitted orders, we deduce that higher orders do not necessarily give better results. In the case of Bellcore Aug89, the appropriate fitted orders for the moments and autocorrelations lie in the ranges of 10–20 and 10000–30000, respectively.
DOI: 10.1109/NAS.2010.39 (published 2010-07-15)
Citations: 2
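The quantities JAMC jointly approximates — raw moments of selected orders and lag-k autocorrelations of a trace — can be computed empirically as below. This is a plain-Python sketch of the *targets* of the fit, not of KPC or the fitting procedure itself.

```python
def moments_and_lag_corr(trace, orders=(1, 2, 3), lags=(1,)):
    """Empirical targets of the joint approximation: raw moments of the
    requested orders and lag-k autocorrelations of an inter-arrival trace."""
    n = len(trace)
    mean = sum(trace) / n
    var = sum((x - mean) ** 2 for x in trace) / n
    moments = {k: sum(x ** k for x in trace) / n for k in orders}
    corr = {}
    for k in lags:
        cov = sum((trace[i] - mean) * (trace[i + k] - mean)
                  for i in range(n - k)) / (n - k)
        corr[k] = cov / var if var else 0.0
    return moments, corr

moments, corr = moments_and_lag_corr([1.0, 2.0, 1.0, 2.0, 1.0, 2.0])
# an alternating trace has mean 1.5 and lag-1 autocorrelation -1
```

A fitting method like JAMC searches MAP parameters whose analytical moments and lag correlations match these empirical values as closely as possible.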
A Low Cost and Inner-round Pipelined Design of ECB-AES-256 Crypto Engine for Solid State Disk
Fei Wu, Liang Wang, Ji-guang Wan
Solid-state disks (SSDs) are widely used in government and security departments owing to their faster data access, greater durability, better shock and drop resistance, absence of noise, lower power consumption, and lighter weight compared with magnetic disks. As a result, demand for storing data on them securely has grown. The Advanced Encryption Standard (AES) is today's key data encryption standard for protecting data, but implementing a high-speed AES encryption engine consumes a large amount of hardware resources. This paper presents a low-cost, inner-round pipelined ECB-AES-256 encryption engine. By sharing resources between the AES encryption and decryption modules and using lookup tables for the SubBytes and InvSubBytes operations, the logic resources are largely reduced; by using loop-rolling and inner-round pipelining techniques, a high throughput of encryption and decryption operations is achieved. A throughput of 1.986 Gbit/s at a 232.748 MHz clock frequency is achieved using 614 slices of a Xilinx xc6slx45-3fgg484. Simulation results show that the AES crypto design is able to meet the read and write speed of the SATA 1.0 interface.
DOI: 10.1109/NAS.2010.40 (published 2010-07-15)
Citations: 7
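What makes deep pipelining attractive in ECB mode is that every block is enciphered independently, with no chaining between blocks. A toy framing of that property (the XOR "cipher" is a placeholder standing in for AES-256, not a real cipher):

```python
def ecb_encrypt(data, block_cipher, block_size=16):
    """ECB-mode framing: plaintext is cut into fixed-size blocks and each
    block is enciphered independently of the others -- the absence of
    chaining is what lets a hardware engine keep one block per pipeline
    stage in flight at once."""
    assert len(data) % block_size == 0, "ECB needs whole blocks (pad first)"
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    return b"".join(block_cipher(b) for b in blocks)

toy_cipher = lambda b: bytes(x ^ 0xAA for x in b)  # placeholder, NOT AES-256
ciphertext = ecb_encrypt(b"A" * 32, toy_cipher)
```

In a chained mode such as CBC, block i cannot start until block i-1 finishes, which would stall an inner-round pipeline; ECB avoids that dependency, at the well-known cost that identical plaintext blocks yield identical ciphertext blocks.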
A Simple Group Key Management Approach for Mobile Ad Hoc Networks
Bing Wu, Yuhong Dong
Securing communications among a group of nodes in mobile ad hoc networks (MANETs) is challenging due to the lack of trusted infrastructure. Group key management is one of the basic building blocks of secure group communication. A group key is a common secret used in cryptographic algorithms, and group key management involves creating and distributing that secret to all group members. A change of membership requires the group key to be refreshed to ensure backward and forward secrecy. In this paper, we extend our previous work with new protocols. Our basic idea is that each group member does not need to order intermediate keys and can deduce the group key locally. A multicast tree is formed for efficient and reliable message dissemination.
DOI: 10.1109/NAS.2010.20 (published 2010-07-15)
Citations: 2
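The order-independence idea — every member deduces the same group key locally without agreeing on an ordering of intermediate keys — can be illustrated with a hash-based toy. This is not the paper's protocol; it merely shows how sorting the contributions before hashing makes the derivation commutative.

```python
import hashlib

def derive_group_key(contributions):
    """Order-independent local derivation: hash the members' key
    contributions in sorted order, so every member computes the same
    group key no matter in what order the contributions arrived."""
    h = hashlib.sha256()
    for c in sorted(contributions):
        h.update(c)
    return h.hexdigest()

k1 = derive_group_key([b"alice", b"bob", b"carol"])
k2 = derive_group_key([b"carol", b"alice", b"bob"])
assert k1 == k2   # same key regardless of arrival order
```

On a membership change, replacing or dropping one contribution and re-deriving yields a fresh key, which is the mechanism behind backward and forward secrecy on refresh.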
RAF: A Random Access First Cache Management to Improve SSD-Based Disk Cache
Yang Liu, Jianzhong Huang, C. Xie, Q. Cao
Offering better random-access performance than conventional hard disks, and larger capacity at lower cost than DRAM, NAND-flash-based SSDs are integrated into the server storage hierarchy as a second tier of disk cache between DRAM and disks, caching more data from disks to meet increasingly intensive I/O demands. Unfortunately, available hybrid storage architectures cannot fully exploit SSDs' potential because they absorb too much of the disk tier's workload, which results in excessive wear and in performance degradation associated with internal garbage collection. In this paper, we propose RAF (Random Access First), a hybrid storage architecture that combines an SSD-based disk cache with a disk-drive subsystem. RAF focuses on extending the lifetime of the SSD while improving system performance by giving priority to caching random-access data. In detail, RAF splits the flash cache into a read cache and a write cache to service read and write requests respectively. The read cache holds only random-access data evicted from the file cache, reducing flash wear and write hits. The write cache performs as a circular write-through log, improving system response time and simplifying garbage collection. Like the read cache, the write cache caches only random-access data and flushes it to the hard disks immediately. Note that sequential accesses are serviced by the hard disks directly, to even out the workload between the SSD and disk storage. RAF is implemented in Linux kernel 2.6.30.10. Experimental results show that RAF significantly reduces flash wear and improves performance compared with the state-of-the-art FlashCache architecture.
DOI: 10.1109/NAS.2010.9 · Published: 2010-07-15
Citations: 19
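The RAF abstract describes an admission policy: sequential accesses bypass the SSD cache and go straight to disk, while only random accesses are admitted. The toy sketch below illustrates that policy under stated assumptions (it is not the paper's kernel implementation; `RAFCache` and its parameters are hypothetical): a run of consecutive logical block addresses longer than a threshold is treated as sequential and not cached, and the read cache is a plain LRU.

```python
from collections import OrderedDict


class RAFCache:
    """Toy random-access-first read cache (illustrative only).

    Blocks that extend a sequential run of LBAs past `seq_threshold`
    bypass cache admission and are served from disk, so the flash
    tier is reserved for random-access data, as RAF prioritizes.
    """

    def __init__(self, capacity: int, seq_threshold: int = 3):
        self.capacity = capacity
        self.seq_threshold = seq_threshold
        self.cache = OrderedDict()  # LRU order: oldest first
        self.last_lba = None
        self.run_len = 0

    def _is_sequential(self, lba: int) -> bool:
        # Track the length of the current run of consecutive LBAs.
        if self.last_lba is not None and lba == self.last_lba + 1:
            self.run_len += 1
        else:
            self.run_len = 1
        self.last_lba = lba
        return self.run_len >= self.seq_threshold

    def read(self, lba: int, disk: dict):
        if lba in self.cache:           # hit: refresh LRU position
            self.cache.move_to_end(lba)
            return self.cache[lba]
        data = disk[lba]                # miss: fetch from disk
        if not self._is_sequential(lba):
            self.cache[lba] = data      # admit random accesses only
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict LRU entry
        return data
```

A long sequential scan therefore pollutes the cache with at most the first `seq_threshold - 1` blocks of the run; a real implementation would also split off a circular write-through log for writes, as the abstract describes.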