{"title":"A decentralised framework for efficient storage and processing of big data using HDFS and IPFS","authors":"F. John, S. Gopinath, E. Sherly","doi":"10.1504/ijht.2020.10034630","DOIUrl":null,"url":null,"abstract":"Big data revolution emerged with greater opportunities as well as challenges. Some of the major challenges include capturing, storing, transferring, analysing, processing and updating these large and complex datasets. Traditional data handling techniques cannot manage this fast growing data. Apache Hadoop is one of the best technologies which can address the challenges involved in big data handling. Hadoop is a centralised, distributed data storage model. InterPlanetary file system (IPFS) is an emerging technology which can provide a decentralised distributed storage. By integrating both these technologies, we can create a better framework for the distributed storage and processing of big data. In the proposed work, we formulated a model for big data placement, replication and processing by combining the features of Hadoop and IPFS. Hadoop distributed file system and IPFS jointly handle the data placement and replication tasks and the programming framework MapReduce in Hadoop handle the data processing task. The experimental result shows that the proposed framework can achieve cost-effective storage as well as faster processing of big data.","PeriodicalId":402393,"journal":{"name":"International Journal of Humanitarian Technology","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Humanitarian Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1504/ijht.2020.10034630","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The big data revolution has brought great opportunities as well as challenges. Major challenges include capturing, storing, transferring, analysing, processing and updating these large and complex datasets. Traditional data handling techniques cannot manage such fast-growing data. Apache Hadoop is one of the leading technologies for addressing the challenges of big data handling; however, its storage layer, the Hadoop Distributed File System (HDFS), follows a centralised distributed-storage model. The InterPlanetary File System (IPFS) is an emerging technology that provides decentralised distributed storage. By integrating the two, a better framework for the distributed storage and processing of big data can be built. In the proposed work, we formulate a model for big data placement, replication and processing that combines the features of Hadoop and IPFS: HDFS and IPFS jointly handle data placement and replication, while Hadoop's MapReduce programming framework handles data processing. Experimental results show that the proposed framework achieves cost-effective storage as well as faster processing of big data.
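The abstract does not give implementation details, so the following is only a minimal sketch of how the described split of responsibilities might look in practice: a file is added to a local IPFS node (decentralised copy, identified by its CID) and also copied into HDFS, where MapReduce can process it. The helper name store_block, the file sales_2020.csv and the HDFS path /bigdata/input are hypothetical; the sketch assumes the standard `ipfs` and `hdfs` command-line tools are installed and a local IPFS daemon and HDFS cluster are running. It is not the authors' actual implementation.

```python
import subprocess


def store_block(local_path: str, hdfs_dir: str) -> str:
    """Hypothetical helper: place one dataset in both storage layers.

    Adds the file to the local IPFS node (decentralised, content-addressed
    copy) and copies it into HDFS (the copy MapReduce reads), returning the
    IPFS CID so a catalogue could map HDFS paths to IPFS content addresses.
    """
    # Add the file to IPFS; `-q` (quiet) prints only the resulting CID.
    cid = subprocess.run(
        ["ipfs", "add", "-q", local_path],
        check=True, capture_output=True, text=True,
    ).stdout.strip()

    # Copy the same file into HDFS for MapReduce processing (`-f` overwrites).
    subprocess.run(
        ["hdfs", "dfs", "-put", "-f", local_path, hdfs_dir],
        check=True,
    )
    return cid


if __name__ == "__main__":
    cid = store_block("sales_2020.csv", "/bigdata/input")  # hypothetical names
    print("Stored in HDFS; IPFS CID:", cid)
    # A standard MapReduce job can now process /bigdata/input, e.g.:
    #   hadoop jar hadoop-mapreduce-examples.jar wordcount /bigdata/input /bigdata/output
```

Keeping the IPFS CID alongside the HDFS path is one plausible way to let the decentralised layer serve as a low-cost replica store while Hadoop retains its usual role in processing, which is the division of labour the abstract describes.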