{"title":"CUDA-Based Parallel Implementation of IBM Word Alignment Algorithm for Statistical Machine Translation","authors":"Siyuan Jing, Gaorong Yan, Xingyuan Chen, Peng Jin, Zhaoyi Guo","doi":"10.1109/PDCAT.2016.050","DOIUrl":null,"url":null,"abstract":"Word alignment is a basic task in natural language processing and it usually serves as the starting point when building a modern statistical machine translation system. However, the state-of-art parallel algorithm for word alignment is still time-consuming. In this work, we explore a parallel implementation of word alignment algorithm on Graphics Processor Unit (GPU), which has been widely available in the field of high performance computing. We use the Compute Unified Device Architecture (CUDA) programming model to re-implement a state-of-the-art word alignment algorithm, called IBM Expectation-Maximization (EM) algorithm. A Tesla K40M card with 2880 cores is used for experiments and execution times obtained with the proposed algorithm are compared with a sequential algorithm and a multi-threads algorithm on an IBM X3850 server, which has two Intel Xeon E7 CPUs (2.0GHz * 10 cores). The best experimental results show a 16.8-fold speedup compared to the multi-threads algorithm and a 234.7-fold speedup compared to the sequential algorithm.","PeriodicalId":203925,"journal":{"name":"2016 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PDCAT.2016.050","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Word alignment is a basic task in natural language processing, and it usually serves as the starting point when building a modern statistical machine translation system. However, the state-of-the-art parallel algorithms for word alignment are still time-consuming. In this work, we explore a parallel implementation of the word alignment algorithm on the Graphics Processing Unit (GPU), which is widely used in the field of high-performance computing. We use the Compute Unified Device Architecture (CUDA) programming model to re-implement a state-of-the-art word alignment algorithm, the IBM Expectation-Maximization (EM) algorithm. A Tesla K40M card with 2880 cores is used for the experiments, and the execution times obtained with the proposed algorithm are compared with those of a sequential algorithm and a multi-threaded algorithm on an IBM X3850 server with two Intel Xeon E7 CPUs (2.0 GHz, 10 cores each). The best experimental results show a 16.8-fold speedup over the multi-threaded algorithm and a 234.7-fold speedup over the sequential algorithm.
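
The abstract does not include code, so the listing below is only a minimal sketch of how the E-step of IBM Model 1 EM word alignment can be parallelized with CUDA: one thread handles one sentence pair, computes the normalization over source words for each target word, and accumulates expected counts with atomicAdd. The kernel name, dense translation-table layout, toy corpus, host-side M-step, and omission of the NULL word are all illustrative assumptions and not the authors' implementation.

// Sketch (not the paper's code): one EM iteration's E-step for IBM Model 1 on the GPU.
// Simplification: no NULL source word; dense t-table t[e * V_F + f] = P(f | e).
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void model1_estep(const int* src, const int* tgt,
                             const int* src_off, const int* tgt_off,
                             int n_pairs, int V_F,
                             const float* t, float* count, float* total)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per sentence pair
    if (p >= n_pairs) return;

    int sb = src_off[p], se = src_off[p + 1];        // source-sentence span
    int tb = tgt_off[p], te = tgt_off[p + 1];        // target-sentence span

    for (int j = tb; j < te; ++j) {                  // each target word f_j
        int f = tgt[j];
        float norm = 0.f;
        for (int i = sb; i < se; ++i)                // normalization over source words
            norm += t[src[i] * V_F + f];
        for (int i = sb; i < se; ++i) {              // accumulate expected counts
            int e = src[i];
            float delta = t[e * V_F + f] / norm;
            atomicAdd(&count[e * V_F + f], delta);
            atomicAdd(&total[e], delta);
        }
    }
}

int main()
{
    // Toy word-id corpus (hypothetical data): two sentence pairs.
    const int V_E = 3, V_F = 3;
    std::vector<int> src = {0, 1, 1, 2};
    std::vector<int> tgt = {0, 1, 1, 2};
    std::vector<int> src_off = {0, 2, 4};
    std::vector<int> tgt_off = {0, 2, 4};
    int n_pairs = 2;

    std::vector<float> t(V_E * V_F, 1.f / V_F);      // uniform initialization of t(f|e)

    int *d_src, *d_tgt, *d_soff, *d_toff;
    float *d_t, *d_count, *d_total;
    cudaMalloc(&d_src, src.size() * sizeof(int));
    cudaMalloc(&d_tgt, tgt.size() * sizeof(int));
    cudaMalloc(&d_soff, src_off.size() * sizeof(int));
    cudaMalloc(&d_toff, tgt_off.size() * sizeof(int));
    cudaMalloc(&d_t, t.size() * sizeof(float));
    cudaMalloc(&d_count, t.size() * sizeof(float));
    cudaMalloc(&d_total, V_E * sizeof(float));
    cudaMemcpy(d_src, src.data(), src.size() * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(d_tgt, tgt.data(), tgt.size() * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(d_soff, src_off.data(), src_off.size() * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(d_toff, tgt_off.data(), tgt_off.size() * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(d_t, t.data(), t.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemset(d_count, 0, t.size() * sizeof(float));
    cudaMemset(d_total, 0, V_E * sizeof(float));

    model1_estep<<<(n_pairs + 255) / 256, 256>>>(d_src, d_tgt, d_soff, d_toff,
                                                 n_pairs, V_F, d_t, d_count, d_total);
    cudaDeviceSynchronize();

    // M-step on the host for brevity: t(f|e) = count(e,f) / total(e).
    std::vector<float> count(t.size()), total(V_E);
    cudaMemcpy(count.data(), d_count, count.size() * sizeof(float), cudaMemcpyDeviceToHost);
    cudaMemcpy(total.data(), d_total, total.size() * sizeof(float), cudaMemcpyDeviceToHost);
    for (int e = 0; e < V_E; ++e)
        for (int f = 0; f < V_F; ++f)
            printf("t(%d|%d) = %.3f\n", f, e, total[e] > 0 ? count[e * V_F + f] / total[e] : 0.f);

    cudaFree(d_src); cudaFree(d_tgt); cudaFree(d_soff); cudaFree(d_toff);
    cudaFree(d_t); cudaFree(d_count); cudaFree(d_total);
    return 0;
}

In a real system the M-step would also run on the device and the E-step would likely assign finer-grained work (e.g., one thread per target word) to better occupy the thousands of cores the paper targets; the one-thread-per-pair mapping above is chosen only to keep the sketch short.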