Pub Date: 2019-08-01 · DOI: 10.1109/NVMSA.2019.8863521
Adaptive Memory and Storage Fusion on Non-Volatile One-Memory System
Chi-Hsing Chang, Che-Wei Chang
Non-volatile memory (NVM), such as phase change memory (PCM), is a promising candidate to replace DRAM because of its lower leakage power and higher density. Since PCM is non-volatile, it can also serve as storage to support in-place execution and reduce loading time. However, because conventional operating systems apply different strategies to satisfy the distinct constraints of the memory and storage subsystems, using PCM as both memory and storage requires careful consideration of the system’s inherent constraints, such as limited lifetime, retention-time requirements, and possible overheads. Most existing work still divides NVM into separate memory and storage partitions, but this strategy incurs the same overhead of loading data from storage to memory as in conventional systems. In our work, we rethink the data retention-time requirements for PCM memory/storage and develop an adaptive memory-storage management strategy that dynamically reconfigures the One-Memory System, based on the current average write-cycle count and the number of retention-time-qualified frames available for storage, to reduce the extra data movement between memory and storage at a small cost in lifetime. Experimental results show that our adaptive design improves performance by eliminating 86.1% of the extra writes caused by data movement while sacrificing only 3.4% of the system’s lifetime.
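The abstract does not spell out the reconfiguration policy, but the general idea of assigning storage roles to frames whose wear still permits the required retention time can be illustrated with a small sketch. Everything below (the threshold values, the Frame class, and the reconfigure helper) is a hypothetical illustration under assumed parameters, not the paper's actual implementation.

```python
# Minimal sketch of an adaptive memory/storage reconfiguration step for a
# PCM "One-Memory" system. Thresholds and data structures are assumptions
# made for illustration; the paper's policy details are not given here.

RETENTION_QUALIFIED_CYCLES = 10_000_000   # assumed: frames below this wear level
                                          # still meet storage retention requirements
MIN_STORAGE_FRAMES = 4096                 # assumed target number of storage frames

class Frame:
    def __init__(self, idx):
        self.idx = idx
        self.write_cycles = 0             # per-frame wear counter
        self.role = "memory"              # "memory" or "storage"

def retention_qualified(frame):
    """A frame may hold storage data only if its wear still allows the
    longer retention time that storage data requires."""
    return frame.write_cycles < RETENTION_QUALIFIED_CYCLES

def reconfigure(frames):
    """Reassign frame roles so storage uses retention-qualified frames,
    letting data written in place stay there instead of being copied
    between separate memory and storage regions."""
    qualified = [f for f in frames if retention_qualified(f)]
    avg_cycles = sum(f.write_cycles for f in frames) / len(frames)
    if len(qualified) < MIN_STORAGE_FRAMES:
        # Not enough healthy frames: shrink the storage share for now
        # (policy choice assumed for illustration).
        storage_frames = qualified
    else:
        # Prefer the least-worn qualified frames for storage.
        storage_frames = sorted(qualified, key=lambda f: f.write_cycles)[:MIN_STORAGE_FRAMES]
    storage_ids = {f.idx for f in storage_frames}
    for f in frames:
        f.role = "storage" if f.idx in storage_ids else "memory"
    return avg_cycles, len(qualified)
```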
{"title":"Adaptive Memory and Storage Fusion on Non-Volatile One-Memory System","authors":"Chi-Hsing Chang, Che-Wei Chang","doi":"10.1109/NVMSA.2019.8863521","DOIUrl":"https://doi.org/10.1109/NVMSA.2019.8863521","url":null,"abstract":"Non-volatile memory (NVM), such as phase change memory (PCM), can be a promising candidate to replace DRAM because of its lower leakage power and higher density. Since PCM is non-volatile, it can also be used as storage to support in-place execution and reduce loading time. However, as conventional operating systems have different strategies to satisfy various constraints on memory and storage subsystems, using PCM as both memory and storage in a system requires thorough consideration on the system’s inherent constraints, such as limited lifetime, retention time requirements, and possible overheads. Most existing work still divide NVM into separated memory and storage parts, but this strategy still incurs the overhead of loading data from storage to memory as in conventional systems. In our work, we rethink the data retention time requirements for PCM memory/storage and develop an adaptive memory-storage management strategy to dynamically reconfigure the One-Memory System, with considerations of the current average write-cycle and the number of retention-time qualified frames for storage, to reduce the extra data movement between memory and storage with a limited lifetime sacrifice. Experimental results show that our adaptive design improves the performance by reducing 86.1% of the extra writes of data movement, and only 3.4% of the system’s lifetime is sacrificed.","PeriodicalId":438544,"journal":{"name":"2019 IEEE Non-Volatile Memory Systems and Applications Symposium (NVMSA)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127282838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-08-01 · DOI: 10.1109/NVMSA.2019.8863525
Replanting Your Forest: NVM-friendly Bagging Strategy for Random Forest
Y. Ho, Chun-Feng Wu, Ming-Chang Yang, Tseng-Yi Chen, Yuan-Hao Chang
Random forest is effective and accurate in making predictions for classification and regression problems, which constitute the majority of machine learning applications nowadays. However, as data are generated explosively in this big-data era, many machine learning algorithms, including random forest, may struggle to maintain and process all the required data in main memory. Instead, intensive data movements (i.e., data swapping) between the faster-but-smaller main memory and the slower-but-larger secondary storage may occur excessively and severely degrade performance. To address this challenge, emerging non-volatile memory (NVM) technologies are expected to substitute for traditional random access memory (RAM) and build a larger-than-ever main memory space because of their higher cell density, lower power consumption, and read performance comparable to traditional RAM. Nevertheless, the limited write endurance of NVM cells and the read-write asymmetry of NVMs may still limit the feasibility of running machine learning algorithms directly on NVMs. This dilemma inspires this study to develop an NVM-friendly bagging strategy for the random forest algorithm that trades the “randomness” of the sampled data for reduced data movement in the memory hierarchy without hurting prediction accuracy. The evaluation results show that the proposed design can save up to 72% of the write accesses on representative traces with nearly no degradation in prediction accuracy.
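One plausible reading of "trading randomness for reduced data movement" is a bootstrap sampler that prefers rows already resident in main memory over rows that would have to be swapped in from storage. The sketch below is a hypothetical illustration of that idea; the resident_rows set, the resident_bias parameter, and the sampling procedure are assumptions, not the sampling policy actually used in the paper.

```python
import random

def nvm_friendly_bootstrap(n_rows, sample_size, resident_rows, resident_bias=0.9):
    """Draw a bootstrap sample that prefers rows already resident in memory,
    trading some sampling randomness for fewer swap-ins from storage.

    resident_rows: indices of rows currently held in main memory (assumed input).
    resident_bias: probability of drawing from resident rows (assumed parameter).
    """
    resident = list(resident_rows)
    sample = []
    for _ in range(sample_size):
        if resident and random.random() < resident_bias:
            sample.append(random.choice(resident))      # no data movement needed
        else:
            sample.append(random.randrange(n_rows))     # may trigger a swap-in
    return sample

# Usage sketch: each tree in the forest is trained on its own biased bootstrap
# sample, e.g.
#   indices = nvm_friendly_bootstrap(len(data), len(data), resident_rows)
#   tree = fit_tree(data, indices)   # fit_tree is a placeholder, not a real API
```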
{"title":"Replanting Your Forest: NVM-friendly Bagging Strategy for Random Forest","authors":"Y. Ho, Chun-Feng Wu, Ming-Chang Yang, Tseng-Yi Chen, Yuan-Hao Chang","doi":"10.1109/NVMSA.2019.8863525","DOIUrl":"https://doi.org/10.1109/NVMSA.2019.8863525","url":null,"abstract":"Random forest is effective and accurate in making predictions for classification and regression problems, which constitute the majority of machine learning applications or systems nowadays. However, as the data are being generated explosively in this big data era, many machine learning algorithms, including the random forest algorithm, may face the difficulty in maintaining and processing all the required data in the main memory. Instead, intensive data movements (i.e., data swappings) between the faster-but-smaller main memory and the slowerbut-larger secondary storage may occur excessively and largely degrade the performance. To address this challenge, the emerging non-volatile memory (NVM) technologies are placed great hopes to substitute the traditional random access memory (RAM) and to build a larger-than-ever main memory space because of its higher cell density, lower power consumption, and comparable read performance as traditional RAM. Nevertheless, the limited write endurance of NVM cells and the read-write asymmetry of NVMs may still limit the feasibility of performing machine learning algorithms directly on NVMs. Such dilemma inspires this study to develop an NVM-friendly bagging strategy for the random forest algorithm, in order to trade the “randomness” of the sampled data for the reduced data movements in the memory hierarchy without hurting the prediction accuracy. The evaluation results show that the proposed design could save up to 72% of the write accesses on the representative traces with nearly no degradation on the prediction accuracy.","PeriodicalId":438544,"journal":{"name":"2019 IEEE Non-Volatile Memory Systems and Applications Symposium (NVMSA)","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126230276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}