Memory-efficient semantic segmentation of large microscopy images using graph-based neural networks.

Atishay Jain, David H Laidlaw, Peter Bajcsy, Ritambhara Singh
{"title":"Memory-efficient semantic segmentation of large microscopy images using graph-based neural networks.","authors":"Atishay Jain, David H Laidlaw, Peter Bajcsy, Ritambhara Singh","doi":"10.1093/jmicro/dfad049","DOIUrl":null,"url":null,"abstract":"<p><p>We present a graph neural network (GNN)-based framework applied to large-scale microscopy image segmentation tasks. While deep learning models, like convolutional neural networks (CNNs), have become common for automating image segmentation tasks, they are limited by the image size that can fit in the memory of computational hardware. In a GNN framework, large-scale images are converted into graphs using superpixels (regions of pixels with similar color/intensity values), allowing us to input information from the entire image into the model. By converting images with hundreds of millions of pixels to graphs with thousands of nodes, we can segment large images using memory-limited computational resources. We compare the performance of GNN- and CNN-based segmentation in terms of accuracy, training time and required graphics processing unit memory. Based on our experiments with microscopy images of biological cells and cell colonies, GNN-based segmentation used one to three orders-of-magnitude fewer computational resources with only a change in accuracy of ‒2 % to +0.3 %. Furthermore, errors due to superpixel generation can be reduced by either using better superpixel generation algorithms or increasing the number of superpixels, thereby allowing for improvement in the GNN framework's accuracy. This trade-off between accuracy and computational cost over CNN models makes the GNN framework attractive for many large-scale microscopy image segmentation tasks in biology.</p>","PeriodicalId":74193,"journal":{"name":"Microscopy (Oxford, England)","volume":" ","pages":"275-286"},"PeriodicalIF":0.0000,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Microscopy (Oxford, England)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/jmicro/dfad049","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

We present a graph neural network (GNN)-based framework applied to large-scale microscopy image segmentation tasks. While deep learning models, like convolutional neural networks (CNNs), have become common for automating image segmentation tasks, they are limited by the image size that can fit in the memory of computational hardware. In a GNN framework, large-scale images are converted into graphs using superpixels (regions of pixels with similar color/intensity values), allowing us to input information from the entire image into the model. By converting images with hundreds of millions of pixels to graphs with thousands of nodes, we can segment large images using memory-limited computational resources. We compare the performance of GNN- and CNN-based segmentation in terms of accuracy, training time and required graphics processing unit memory. Based on our experiments with microscopy images of biological cells and cell colonies, GNN-based segmentation used one to three orders of magnitude fewer computational resources with only a change in accuracy of −2% to +0.3%. Furthermore, errors due to superpixel generation can be reduced by either using better superpixel generation algorithms or increasing the number of superpixels, thereby allowing for improvement in the GNN framework's accuracy. This trade-off between accuracy and computational cost over CNN models makes the GNN framework attractive for many large-scale microscopy image segmentation tasks in biology.
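To make the image-to-graph conversion described above concrete, the sketch below builds a superpixel graph from a single image: SLIC superpixels become nodes, the mean color of each superpixel becomes its node feature, touching superpixels are linked by edges, and one toy mean-aggregation step stands in for a learned GNN layer. This is only an illustrative sketch under assumed choices (scikit-image's SLIC, mean-color features, a hand-rolled aggregation step); the paper's actual superpixel algorithm, node features and GNN architecture may differ.

```python
import numpy as np
from skimage import data
from skimage.segmentation import slic

# Example RGB image standing in for a large microscopy tile.
image = data.astronaut()                       # shape (512, 512, 3)
n_pixels = image.shape[0] * image.shape[1]

# 1) Superpixels: regions of pixels with similar color/intensity values.
labels = slic(image, n_segments=1000, compactness=10, start_label=0)
n_nodes = labels.max() + 1

# 2) Node features: mean color of each superpixel.
features = np.zeros((n_nodes, 3))
counts = np.bincount(labels.ravel(), minlength=n_nodes)
for c in range(3):
    sums = np.bincount(labels.ravel(),
                       weights=image[..., c].ravel().astype(float),
                       minlength=n_nodes)
    features[:, c] = sums / counts

# 3) Edges: connect superpixels whose pixels touch horizontally or vertically.
right = np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1)
down = np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1)
pairs = np.concatenate([right, down])
pairs = pairs[pairs[:, 0] != pairs[:, 1]]      # drop intra-superpixel pairs
pairs = np.sort(pairs, axis=1)                 # make edges undirected
edge_index = np.unique(pairs, axis=0).T        # shape (2, num_edges)

print(f"{n_pixels} pixels -> {n_nodes} nodes, {edge_index.shape[1]} edges")

# 4) One toy message-passing step: each node averages its own feature with
#    those of its neighbors (a stand-in for a learned GNN layer).
agg = features.copy()
deg = np.ones(n_nodes)
for a, b in edge_index.T:
    agg[a] += features[b]
    agg[b] += features[a]
    deg[a] += 1
    deg[b] += 1
node_embeddings = agg / deg[:, None]
```

The memory saving follows from this reduction: a segmentation model now operates on thousands of node feature vectors and their adjacency rather than on hundreds of millions of pixels, so the whole image's structure fits in limited GPU memory.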
