Masked Graph Autoencoders with Contrastive Augmentation for Spatially Resolved Transcriptomics Data

Donghai Fang, Fangfang Zhu, Dongting Xie, Wenwen Min
arXiv:2408.06377 · arXiv - QuanBio - Genomics · Published 2024-08-09 · https://doi.org/arxiv-2408.06377
Citations: 0

Abstract

With the rapid advancement of Spatially Resolved Transcriptomics (SRT) technology, it is now possible to comprehensively measure gene transcription while preserving the spatial context of tissues. Spatial domain identification and gene denoising are key objectives in SRT data analysis. We propose a Contrastively Augmented Masked Graph Autoencoder (STMGAC) to learn low-dimensional latent representations for domain identification. In the latent space, persistent signals for representations are obtained through self-distillation to guide self-supervised matching. At the same time, positive and negative anchor pairs are constructed using triplet learning to augment the discriminative ability. We evaluated the performance of STMGAC on five datasets, achieving results superior to those of existing baseline methods. All code and public datasets used in this paper are available at https://github.com/wenwenmin/STMGAC and https://zenodo.org/records/13253801.
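The two training signals outlined above — masking spot-level expression before reconstruction, and a triplet margin loss over positive/negative anchor pairs — can be illustrated with a minimal NumPy sketch. The function names, shapes, and zero-masking strategy here are illustrative assumptions, not the authors' actual STMGAC implementation:

```python
import numpy as np

def mask_features(X, mask_rate=0.3, rng=None):
    """Zero out the expression vectors of a random subset of spots.

    X is an (n_spots, n_genes) matrix; a masked graph autoencoder would be
    trained to reconstruct the masked rows from their graph neighborhood.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    mask = rng.random(X.shape[0]) < mask_rate  # boolean mask over spots
    X_masked = X.copy()
    X_masked[mask] = 0.0
    return X_masked, mask

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge loss pulling the positive anchor closer to the anchor
    than the negative anchor, by at least `margin`, in latent space."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(d_pos - d_neg + margin, 0.0)
```

In this sketch the loss is zero once the negative anchor is more than `margin` farther from the anchor than the positive anchor, so only hard or violating triplets contribute gradient signal.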