Scalable Hardware Architecture for Fast Gradient Boosted Tree Training

Tamon Sadasue, Takuya Tanaka, Ryosuke Kasahara, Arief Darmawan, T. Isshiki
{"title":"用于快速梯度增强树训练的可扩展硬件架构","authors":"Tamon Sadasue, Takuya Tanaka, Ryosuke Kasahara, Arief Darmawan, T. Isshiki","doi":"10.2197/ipsjtsldm.14.11","DOIUrl":null,"url":null,"abstract":": Gradient Boosted Tree is a powerful machine learning method that supports both classification and regres- sion, and is widely used in fields requiring high-precision prediction, particularly for various types of tabular data sets. Owing to the recent increase in data size, the number of attributes, and the demand for frequent model updates, a fast and e ffi cient training is required. FPGA is suitable for acceleration with power e ffi ciency because it can realize a domain specific hardware architecture; however it is necessary to flexibly support many hyper-parameters to adapt to various dataset sizes, dataset properties, and system limitations such as memory capacity and logic capacity. We introduce a fully pipelined hardware implementation of Gradient Boosted Tree training and a design framework that enables a versatile hardware system description with high performance and flexibility to realize highly parameterized machine learning models. Experimental results show that our FPGA implementation achieves a 11- to 33-times faster performance and more than 300-times higher power e ffi ciency than a state-of-the-art GPU accelerated software implementation.","PeriodicalId":38964,"journal":{"name":"IPSJ Transactions on System LSI Design Methodology","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Scalable Hardware Architecture for fast Gradient Boosted Tree Training\",\"authors\":\"Tamon Sadasue, Takuya Tanaka, Ryosuke Kasahara, Arief Darmawan, T. Isshiki\",\"doi\":\"10.2197/ipsjtsldm.14.11\",\"DOIUrl\":null,\"url\":null,\"abstract\":\": Gradient Boosted Tree is a powerful machine learning method that supports both classification and regres- sion, and is widely used in fields requiring high-precision prediction, particularly for various types of tabular data sets. Owing to the recent increase in data size, the number of attributes, and the demand for frequent model updates, a fast and e ffi cient training is required. FPGA is suitable for acceleration with power e ffi ciency because it can realize a domain specific hardware architecture; however it is necessary to flexibly support many hyper-parameters to adapt to various dataset sizes, dataset properties, and system limitations such as memory capacity and logic capacity. We introduce a fully pipelined hardware implementation of Gradient Boosted Tree training and a design framework that enables a versatile hardware system description with high performance and flexibility to realize highly parameterized machine learning models. 
Experimental results show that our FPGA implementation achieves a 11- to 33-times faster performance and more than 300-times higher power e ffi ciency than a state-of-the-art GPU accelerated software implementation.\",\"PeriodicalId\":38964,\"journal\":{\"name\":\"IPSJ Transactions on System LSI Design Methodology\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IPSJ Transactions on System LSI Design Methodology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2197/ipsjtsldm.14.11\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"Engineering\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IPSJ Transactions on System LSI Design Methodology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2197/ipsjtsldm.14.11","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"Engineering","Score":null,"Total":0}
Abstract: Gradient Boosted Tree is a powerful machine learning method that supports both classification and regression, and is widely used in fields requiring high-precision prediction, particularly for various types of tabular datasets. Owing to the recent growth in data size and attribute counts, and the demand for frequent model updates, fast and efficient training is required. FPGAs are suitable for power-efficient acceleration because they can realize a domain-specific hardware architecture; however, the design must flexibly support many hyper-parameters to adapt to various dataset sizes, dataset properties, and system limitations such as memory capacity and logic capacity. We introduce a fully pipelined hardware implementation of Gradient Boosted Tree training and a design framework that enables a versatile hardware system description, combining high performance with the flexibility to realize highly parameterized machine learning models. Experimental results show that our FPGA implementation achieves 11 to 33 times faster performance and more than 300 times higher power efficiency than a state-of-the-art GPU-accelerated software implementation.
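For context on what a Gradient Boosted Tree trainer must compute at each tree node, the sketch below shows the histogram-based split search that dominates training time and that a pipelined accelerator would stream over the dataset. This is a minimal software illustration in Python, not the paper's hardware architecture; the function name, bin count, and regularization parameter are illustrative assumptions, following the common XGBoost/LightGBM-style second-order gain.

```python
import numpy as np

def best_split_gain(feature, grad, hess, n_bins=16, reg_lambda=1.0):
    """Histogram-based split search for one feature at one tree node.

    Returns (best_gain, best_bin): the largest loss reduction and the
    bin boundary at which to split. Uses the standard second-order
    gain from XGBoost/LightGBM-style GBT training:
        score(G, H) = G^2 / (H + lambda)
    """
    # Quantize the feature into equal-width bins (production systems
    # typically use quantile bins instead).
    edges = np.linspace(feature.min(), feature.max(), n_bins + 1)
    bins = np.clip(np.digitize(feature, edges[1:-1]), 0, n_bins - 1)

    # Per-bin gradient/Hessian histograms: the data-streaming inner
    # loop that a fully pipelined accelerator overlaps with memory I/O.
    g_hist = np.bincount(bins, weights=grad, minlength=n_bins)
    h_hist = np.bincount(bins, weights=hess, minlength=n_bins)

    G, H = g_hist.sum(), h_hist.sum()
    parent_score = G * G / (H + reg_lambda)

    best_gain, best_bin = 0.0, -1
    g_left = h_left = 0.0
    for b in range(n_bins - 1):  # candidate split after each bin
        g_left += g_hist[b]
        h_left += h_hist[b]
        g_right, h_right = G - g_left, H - h_left
        gain = (g_left**2 / (h_left + reg_lambda)
                + g_right**2 / (h_right + reg_lambda)
                - parent_score)
        if gain > best_gain:
            best_gain, best_bin = gain, b
    return best_gain, best_bin

# Example: squared-error regression, where grad = prediction - target
# and hess = 1 per sample (initial prediction 0, so grad = -y).
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = (x > 0.3).astype(float)  # a step function the tree should find
print(best_split_gain(x, grad=-y, hess=np.ones_like(y)))
```

The node is split at the returned bin whenever the gain is positive, and the search repeats for every feature and every node; that regular, data-parallel loop is what makes the workload amenable to the deep pipelining the abstract describes.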