Parallelization of the Trinity Pipeline for De Novo Transcriptome Assembly

Vipin Sachdeva, C. Kim, K. E. Jordan, M. Winn
{"title":"Parallelization of the Trinity Pipeline for De Novo Transcriptome Assembly","authors":"Vipin Sachdeva, C. Kim, K. E. Jordan, M. Winn","doi":"10.1109/IPDPSW.2014.67","DOIUrl":null,"url":null,"abstract":"This paper details a distributed-memory implementation of Chrysalis, part of the popular Trinity workflow used for de novo transcripto me assembly. We have implemented changes to Chrysalis, which was previously multi-threaded for shared-memory architectures, to change it to a hybrid implementation which uses both MPI and OpenMP. With the new hybrid implementation, we report speedups of about a factor of twenty for both Graph From Fasta and Reads To Transcripts on an iDataPlex cluster for a sugar beet dataset containing around 130 million reads. Along with the hybrid implementation, we also use PyFasta to speed up Bowtie execution by a factor of three which is also part of the Trinity workflow. Overall, we reduce the runtime of the Chrysalis step of the Trinity workflow from over 50 hours to less than 5 hours for the sugar beet dataset. By enabling the use of multi-node clusters, this implementation is a significant step towards making de novo transcripto me assembly feasible for ever bigger transcripto me datasets.","PeriodicalId":153864,"journal":{"name":"2014 IEEE International Parallel & Distributed Processing Symposium Workshops","volume":"40 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE International Parallel & Distributed Processing Symposium Workshops","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IPDPSW.2014.67","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

This paper details a distributed-memory implementation of Chrysalis, part of the popular Trinity workflow used for de novo transcriptome assembly. Chrysalis was previously multi-threaded for shared-memory architectures; we have reworked it into a hybrid implementation that uses both MPI and OpenMP. With the new hybrid implementation, we report speedups of about a factor of twenty for both GraphFromFasta and ReadsToTranscripts on an iDataPlex cluster for a sugar beet dataset containing around 130 million reads. Alongside the hybrid implementation, we also use PyFasta to speed up execution of Bowtie, another component of the Trinity workflow, by a factor of three. Overall, we reduce the runtime of the Chrysalis step of the Trinity workflow from over 50 hours to less than 5 hours for the sugar beet dataset. By enabling the use of multi-node clusters, this implementation is a significant step towards making de novo transcriptome assembly feasible for ever larger transcriptome datasets.
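The abstract does not include code, so the following is only a minimal, hypothetical C++ sketch of the kind of hybrid MPI+OpenMP decomposition it describes: batches of reads are block-distributed across MPI ranks, and OpenMP threads process the reads within each rank, echoing the original shared-memory threading of Chrysalis. The helpers load_read_batch and assign_read_to_component and the batch count are illustrative placeholders, not Chrysalis APIs.

```cpp
// Hypothetical sketch only -- not the authors' code and not a Chrysalis API.
#include <mpi.h>
#include <omp.h>
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

// Toy stand-ins for Chrysalis-style I/O and k-mer-to-component lookup.
std::vector<std::string> load_read_batch(int /*batch_id*/) {
    return std::vector<std::string>(1000, "ACGTACGTACGTACGT");
}
int assign_read_to_component(const std::string& read) {
    return static_cast<int>(read.size() % 7);  // fake component id
}

int main(int argc, char** argv) {
    int provided = 0;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank = 0, nranks = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const int num_batches = 1024;  // assumed number of read batches
    // Block-distribute the batches across MPI ranks.
    const int per_rank = (num_batches + nranks - 1) / nranks;
    const int begin = rank * per_rank;
    const int end = std::min(num_batches, begin + per_rank);

    long local_assigned = 0;
    for (int b = begin; b < end; ++b) {
        std::vector<std::string> reads = load_read_batch(b);
        // Shared-memory parallelism inside each rank, as in the original
        // multi-threaded Chrysalis.
        #pragma omp parallel for reduction(+ : local_assigned)
        for (long i = 0; i < static_cast<long>(reads.size()); ++i) {
            if (assign_read_to_component(reads[i]) >= 0) {
                ++local_assigned;
            }
        }
    }

    // Aggregate a simple count on rank 0 as a sanity check.
    long total_assigned = 0;
    MPI_Reduce(&local_assigned, &total_assigned, 1, MPI_LONG, MPI_SUM, 0,
               MPI_COMM_WORLD);
    if (rank == 0) {
        std::printf("reads assigned to components: %ld\n", total_assigned);
    }

    MPI_Finalize();
    return 0;
}
```

Such a program would be built with an MPI compiler wrapper (something like mpicxx with -fopenmp) and launched with mpirun, setting OMP_NUM_THREADS per rank. The real GraphFromFasta and ReadsToTranscripts additionally handle FASTA parsing, k-mer hashing, and per-component output, none of which are shown here.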