{"title":"An archive-based method for efficiently handling small file problems in HDFS","authors":"Junnan Liu, Shengyi Jin, Dong Wang, Han Li","doi":"10.1002/cpe.8260","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>Hadoop distributed file system (HDFS) performs well when storing and managing large files. However, its performance significantly decreases when dealing with massive small files. In response to this problem, a novel archive-based solution is proposed. The archive refers to merging multiple small files into larger data files, which can effectively reduce the memory usage of the NameNode. The current archive-based solutions have the disadvantages of long access time, long archive construction time, and no support for storage, updating and deleting small files in the archive system. Our method utilizes a dynamic hash function to distribute the metadata of small files across multiple metadata files. We construct a primary index that combines dynamic and static indexes for these metadata files. Regarding data files, include some read-only files and one readable–writable file. A small file's contents are written into a readable and writable file. Upon reaching a predetermined threshold, the readable–writable file transitions into read-only status, with a fresh readable–writable file replacing it. Experimental results show that the scheme improves the efficiency of archive access and archive creation and is more efficient than the original HDFS storage and update efficiency.</p>\n </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"36 24","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Concurrency and Computation-Practice & Experience","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/cpe.8260","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0
Abstract
Hadoop Distributed File System (HDFS) performs well when storing and managing large files, but its performance degrades significantly when dealing with massive numbers of small files. In response to this problem, a novel archive-based solution is proposed. Archiving refers to merging multiple small files into larger data files, which effectively reduces the memory usage of the NameNode. Existing archive-based solutions suffer from long access times, long archive construction times, and a lack of support for storing, updating, and deleting small files within the archive. Our method uses a dynamic hash function to distribute the metadata of small files across multiple metadata files, and builds a primary index that combines dynamic and static indexes over these metadata files. The data files consist of several read-only files and one readable–writable file. A small file's contents are written into the readable–writable file; once it reaches a predetermined size threshold, it transitions to read-only status and a fresh readable–writable file replaces it. Experimental results show that the scheme improves the efficiency of archive access and archive creation, and outperforms native HDFS in storage and update efficiency.
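The abstract compresses two mechanisms into a few sentences: small-file metadata spread over multiple metadata files by a dynamic hash, and a single readable–writable data file that is frozen to read-only and replaced once a size threshold is reached. The sketch below is a minimal, assumed illustration of those two ideas only; all class names, fields, and thresholds are hypothetical, the "metadata files" and "data files" are modeled as in-memory structures, and nothing here reflects the paper's actual on-disk format, index layout, or HDFS integration.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Simplified, in-memory sketch of hashed metadata buckets plus a rolling writable data file. */
public class ArchiveSketch {

    /** Metadata record for one small file inside the archive. */
    record FileMeta(String name, int dataFileId, long offset, long length) {}

    /** A data file: appended blocks, current size, and a read-only flag. */
    static class DataFile {
        final int id;
        final List<byte[]> blocks = new ArrayList<>();
        long size = 0;
        boolean readOnly = false;
        DataFile(int id) { this.id = id; }
    }

    // Metadata is spread over several buckets (standing in for metadata files),
    // chosen by hashing the small file's name. Doubling the bucket count when
    // one bucket grows too large is a stand-in for the dynamic hash.
    private List<Map<String, FileMeta>> metaBuckets = new ArrayList<>();
    private static final int BUCKET_LIMIT = 4;       // assumed split threshold
    private static final long DATA_FILE_LIMIT = 64;  // assumed rollover threshold, in bytes

    private final List<DataFile> dataFiles = new ArrayList<>();
    private DataFile writable;

    public ArchiveSketch() {
        metaBuckets.add(new HashMap<>());
        metaBuckets.add(new HashMap<>());
        writable = new DataFile(0);
        dataFiles.add(writable);
    }

    private int bucketOf(String name) {
        return Math.floorMod(name.hashCode(), metaBuckets.size());
    }

    /** Append a small file: its bytes always go to the single writable data file. */
    public void put(String name, byte[] content) {
        // Roll the writable file over once it would exceed the threshold.
        if (writable.size + content.length > DATA_FILE_LIMIT) {
            writable.readOnly = true;                 // freeze the current file
            writable = new DataFile(dataFiles.size()); // and replace it with a fresh one
            dataFiles.add(writable);
        }
        writable.blocks.add(content);
        FileMeta meta = new FileMeta(name, writable.id, writable.size, content.length);
        writable.size += content.length;

        Map<String, FileMeta> bucket = metaBuckets.get(bucketOf(name));
        bucket.put(name, meta);
        if (bucket.size() > BUCKET_LIMIT) grow();
    }

    /** Look up a small file's location via the hashed metadata buckets. */
    public FileMeta get(String name) {
        return metaBuckets.get(bucketOf(name)).get(name);
    }

    /** Double the number of metadata buckets and rehash every record. */
    private void grow() {
        List<Map<String, FileMeta>> old = metaBuckets;
        metaBuckets = new ArrayList<>();
        for (int i = 0; i < old.size() * 2; i++) metaBuckets.add(new HashMap<>());
        for (Map<String, FileMeta> b : old)
            for (FileMeta m : b.values())
                metaBuckets.get(bucketOf(m.name())).put(m.name(), m);
    }

    public static void main(String[] args) {
        ArchiveSketch archive = new ArchiveSketch();
        for (int i = 0; i < 10; i++)
            archive.put("small-" + i + ".txt", ("payload-" + i).getBytes());
        System.out.println(archive.get("small-7.txt")); // data file id, offset, and length
    }
}
```

The design point the sketch tries to capture is that lookups never touch HDFS's NameNode per small file: a hash selects one metadata bucket, and the record found there points directly into a read-only or writable data file, while only the single writable file ever accepts new content.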
About the journal:
Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality, original research papers, and authoritative research review papers, in the overlapping fields of:
Parallel and distributed computing;
High-performance computing;
Computational and data science;
Artificial intelligence and machine learning;
Big data applications, algorithms, and systems;
Network science;
Ontologies and semantics;
Security and privacy;
Cloud/edge/fog computing;
Green computing; and
Quantum computing.