FAST: A framework for high-performance medical image computing and visualization

E. Smistad
{"title":"FAST: A framework for high-performance medical image computing and visualization","authors":"E. Smistad","doi":"10.1145/3456669.3456717","DOIUrl":null,"url":null,"abstract":"Medical image processing and visualization is often computationally demanding. Ultrasound images are acquired in real-time and needs to be processed at a high framerate with low latency. Computed tomography (CT) and magnetic resonance imaging (MRI) create large three dimensional volumes with sizes up to 512 × 512 × 800 voxels. In digital pathology, whole slide microscopy images can have an extreme image size of up to 200, 000 × 100, 000 pixels, which does not even fit into the memory of most computers. Thus, there is a need for smart data storage, processing and visualization methods to handle medical image data. The development of FAST started in 2014, the goal was to create an open-source framework which made GPU and parallel processing of medical images easy and portable. While there existed popular image processing libraries such as the visualization toolkit (VTK), insight toolkit (ITK) and OpenCV, the GPU processing capabilities were still implemented ad-hoc and often implied copying data back and forth from the GPU and CPU. Thus it was decided to use the new OpenCL API to create a cross-platform framework designed bottom-up with GPU processing at the very core. One of the design goals was to remove the burden of moving data back and forth from different processors and memory spaces from the developer. Instead, the developer requests access to the data on a given processor, and FAST will copy and update data as needed. Now, seven years later FAST version 3.2 is released, it still uses OpenCL 1.2 and OpenGL 3.3 at the core of almost all of its operations. FAST can stream images in real-time from ultrasound scanners, webcameras, Intel’s RealSense depth camera, and read many different formats from disk including medical formats such as DICOM, Metaimage and huge microscopy images stored as tiled image pyramids. FAST uses a processing pipeline concept, meaning that you define a pipeline as multiple processing and visualization steps first, then initiate the processing by executing the pipeline. The advantages of this is that it’s easy to change data sources and processing steps. The same pipeline used to process an ultrasound image on disk, can be used to process a real-time stream of ultrasound images. Today FAST pipelines can be created with C++, Python 3 and even without any programming using simple text files. The pipeline approach also opens up possibilities for load balancing and tuning based on analyzing the pipeline as computational graphs, although this has not yet been implemented. In the last five years or so, deep neural networks have become the standard for almost all image processing tasks. Many high-performance frameworks for deep neural network inference already exist, but have very different APIs and use different formats for storing neural network models. FAST now provides a common API for neural networks with multiple backends such as NVIDIA’s TensorRT, Intel’s OpenVINO and Google’s TensorFlow. This removes the burden of the user to learn the API of every inference library, and makes neural network inference as simple as just loading a model stored on disk. This presentation will present the FAST framework and how OpenCL was used to make it. 
The trade-offs between portability/ease-of-use/code complexity and performance has been a constant challenge, often leading to sacrificing performance or having to write multiple versions of the same algorithm to handle different OpenCL implementations. The presentation will also discuss OpenCL features which have been important in developing this framework such as OpenGL interoperability and 2D/3D Images/Textures. FAST is open-source and we invite the community to contribute through GitHub at https://github.com/smistad/FAST","PeriodicalId":73497,"journal":{"name":"International Workshop on OpenCL","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Workshop on OpenCL","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3456669.3456717","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Medical image processing and visualization are often computationally demanding. Ultrasound images are acquired in real time and need to be processed at a high frame rate with low latency. Computed tomography (CT) and magnetic resonance imaging (MRI) create large three-dimensional volumes with sizes up to 512 × 512 × 800 voxels. In digital pathology, whole-slide microscopy images can have an extreme size of up to 200,000 × 100,000 pixels, which does not even fit into the memory of most computers. Thus, there is a need for smart data storage, processing and visualization methods to handle medical image data.

The development of FAST started in 2014 with the goal of creating an open-source framework that makes GPU and parallel processing of medical images easy and portable. While popular image processing libraries such as the Visualization Toolkit (VTK), the Insight Toolkit (ITK) and OpenCV already existed, their GPU processing capabilities were implemented ad hoc and often implied copying data back and forth between the GPU and CPU. It was therefore decided to use the new OpenCL API to create a cross-platform framework designed bottom-up with GPU processing at its core. One of the design goals was to relieve the developer of the burden of moving data back and forth between different processors and memory spaces. Instead, the developer requests access to the data on a given processor, and FAST copies and updates the data as needed. Now, seven years later, FAST version 3.2 has been released; it still uses OpenCL 1.2 and OpenGL 3.3 at the core of almost all of its operations.

FAST can stream images in real time from ultrasound scanners, web cameras and Intel's RealSense depth camera, and can read many different formats from disk, including medical formats such as DICOM and MetaImage as well as huge microscopy images stored as tiled image pyramids. FAST uses a processing pipeline concept: a pipeline is first defined as a series of processing and visualization steps, and the processing is then initiated by executing the pipeline. The advantage of this is that it is easy to change data sources and processing steps; the same pipeline used to process an ultrasound image from disk can be used to process a real-time stream of ultrasound images. Today, FAST pipelines can be created with C++, with Python 3, or even without any programming, using simple text files. The pipeline approach also opens up possibilities for load balancing and tuning based on analyzing pipelines as computational graphs, although this has not yet been implemented.

In the last five years or so, deep neural networks have become the standard for almost all image processing tasks. Many high-performance frameworks for deep neural network inference already exist, but they have very different APIs and use different formats for storing neural network models. FAST now provides a common API for neural networks with multiple backends such as NVIDIA's TensorRT, Intel's OpenVINO and Google's TensorFlow. This removes the burden on the user of learning the API of every inference library, and makes neural network inference as simple as loading a model stored on disk.

This presentation will present the FAST framework and how OpenCL was used to build it. The trade-offs between portability, ease of use and code complexity on one side and performance on the other have been a constant challenge, often leading to sacrificing performance or to writing multiple versions of the same algorithm to handle different OpenCL implementations.
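As a rough illustration of the pipeline concept described above, the following C++ sketch defines a small pipeline that imports an image from disk and renders it, then executes it. It is a minimal sketch only: the header paths and the class names (ImageFileImporter, ImageRenderer, SimpleWindow) follow the naming used in FAST's documentation, but exact signatures may differ between FAST versions, and the input filename is a placeholder.

```cpp
// Minimal sketch of a FAST pipeline (class and header names approximate).
#include <FAST/Importers/ImageFileImporter.hpp>
#include <FAST/Visualization/ImageRenderer/ImageRenderer.hpp>
#include <FAST/Visualization/SimpleWindow.hpp>

using namespace fast;

int main() {
    // Step 1: define the pipeline. Nothing is executed yet; the steps are only connected.
    auto importer = ImageFileImporter::New();
    importer->setFilename("ultrasound_image.mhd");            // placeholder input file

    auto renderer = ImageRenderer::New();
    renderer->addInputConnection(importer->getOutputPort());

    auto window = SimpleWindow::New();
    window->addRenderer(renderer);

    // Step 2: execute the pipeline. Swapping the importer for a real-time streamer
    // (e.g. an ultrasound stream) would leave the rest of the pipeline unchanged.
    window->start();
    return 0;
}
```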
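The common neural-network API can be sketched in a similar way. This is an approximation based on FAST's documentation, not verbatim FAST code: the SegmentationNetwork class, its load() method and the update/getNextFrame pattern may differ in name or signature across versions, and the model file is a placeholder.

```cpp
// Approximate sketch of FAST's common neural-network API (names may differ between versions).
#include <FAST/Importers/ImageFileImporter.hpp>
#include <FAST/Algorithms/NeuralNetwork/SegmentationNetwork.hpp>
#include <FAST/Data/Image.hpp>

using namespace fast;

int main() {
    auto importer = ImageFileImporter::New();
    importer->setFilename("ultrasound_image.mhd");       // placeholder input

    // Loading a model stored on disk is essentially all that is required; the
    // inference backend (TensorRT, OpenVINO or TensorFlow) sits behind the common API.
    auto network = SegmentationNetwork::New();
    network->load("segmentation_model.onnx");             // placeholder model file
    network->setInputConnection(importer->getOutputPort());

    // Run the pipeline once and fetch the resulting segmentation image.
    auto port = network->getOutputPort();
    network->update();
    auto segmentation = port->getNextFrame<Image>();
    return 0;
}
```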
The presentation will also discuss OpenCL features that have been important in developing this framework, such as OpenGL interoperability and 2D/3D images/textures. FAST is open source, and we invite the community to contribute through GitHub at https://github.com/smistad/FAST.
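The OpenGL interoperability and image/texture features mentioned above can be illustrated with the plain OpenCL 1.2 API. The sketch below is a generic interop pattern, not code taken from FAST: the helper function, texture handles and the small inversion kernel are hypothetical, and creating a shared CL/GL context and building the kernel are assumed to have been done elsewhere.

```cpp
// Generic OpenCL 1.2 <-> OpenGL interop pattern (not FAST source code): wrap existing
// OpenGL textures as OpenCL images so a kernel can read and write them without
// copying through host memory. Requires the cl_khr_gl_sharing extension and an
// OpenCL context created to share with the current OpenGL context.
#include <CL/cl.h>
#include <CL/cl_gl.h>
#include <GL/gl.h>

// OpenCL C kernel operating on 2D images/textures (assumed built elsewhere into `kernel`).
static const char* kernelSource = R"CLC(
__constant sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE |
                               CLK_ADDRESS_CLAMP_TO_EDGE |
                               CLK_FILTER_NEAREST;
__kernel void invert(__read_only image2d_t input, __write_only image2d_t output) {
    int2 pos = (int2)(get_global_id(0), get_global_id(1));
    float4 pixel = read_imagef(input, sampler, pos);
    write_imagef(output, pos, (float4)(1.0f) - pixel);
}
)CLC";

// Hypothetical helper: run the kernel on two GL textures of size width x height.
void processTextures(cl_context context, cl_command_queue queue, cl_kernel kernel,
                     GLuint glTextureIn, GLuint glTextureOut,
                     size_t width, size_t height) {
    cl_int err = CL_SUCCESS;
    // Wrap the OpenGL textures as OpenCL image objects (no host-side copy involved).
    cl_mem input  = clCreateFromGLTexture(context, CL_MEM_READ_ONLY,
                                          GL_TEXTURE_2D, 0, glTextureIn, &err);
    cl_mem output = clCreateFromGLTexture(context, CL_MEM_WRITE_ONLY,
                                          GL_TEXTURE_2D, 0, glTextureOut, &err);

    cl_mem objects[2] = {input, output};
    // Hand the textures over to OpenCL, run the kernel, then hand them back to OpenGL.
    clEnqueueAcquireGLObjects(queue, 2, objects, 0, nullptr, nullptr);
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &input);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &output);
    size_t globalSize[2] = {width, height};
    clEnqueueNDRangeKernel(queue, kernel, 2, nullptr, globalSize, nullptr, 0, nullptr, nullptr);
    clEnqueueReleaseGLObjects(queue, 2, objects, 0, nullptr, nullptr);
    clFinish(queue);

    clReleaseMemObject(input);
    clReleaseMemObject(output);
}
```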