Correction to: Nature Biomedical Engineering https://doi.org/10.1038/s41551-024-01313-4, published online 4 December 2024.
Correction to: Nature Biomedical Engineering https://doi.org/10.1038/s41551-023-01082-6, published online 7 September 2023.
Correction to: Nature Biomedical Engineering https://doi.org/10.1038/s41551-023-01106-1, published online 11 December 2023.
Graph representation learning has been leveraged to identify cancer genes from biological networks. However, its applicability is limited by insufficient interpretability and generalizability under integrative network analysis. Here we report the development of an interpretable and generalizable transformer-based model that accurately predicts cancer genes by leveraging graph representation learning and the integration of multi-omics data with the topologies of homogeneous and heterogeneous networks of biological interactions. The model allows for the interpretation of the respective importance of multi-omic and higher-order structural features; it achieved state-of-the-art performance in the prediction of cancer genes across biological networks (including networks of interactions between miRNAs and proteins, transcription factors and proteins, and transcription factors and miRNAs) in pan-cancer and cancer-specific scenarios, and it predicted 57 cancer-gene candidates (including three genes that had not been identified by other models) among 4,729 unlabelled genes across 8 pan-cancer datasets. The model’s interpretability and generalizability may facilitate the understanding of gene-related regulatory mechanisms and the discovery of new cancer genes.
In magnetic resonance imaging of the brain, an image-preprocessing step removes the skull and other non-brain tissue from the images. However, such skull-stripping methods often struggle with large data heterogeneity across medical sites and with dynamic changes in tissue contrast across the lifespan. Here we report a skull-stripping model for magnetic resonance images that generalizes across the lifespan by leveraging personalized priors from brain atlases. The model consists of a brain-extraction module that provides an initial estimate of the brain tissue in an image, and a registration module that derives a personalized prior from an age-specific atlas. The model is substantially more accurate than state-of-the-art skull-stripping methods, as we show with a large and diverse dataset of 21,334 scans spanning the lifespan, acquired from 18 sites with various imaging protocols and scanners, and it generates naturally consistent and seamless lifespan changes in brain volume, faithfully charting the underlying biological processes of brain development and ageing.
The development of prophylactic cancer vaccines typically involves the selection of combinations of tumour-associated antigens, tumour-specific antigens and neoantigens. Here we show that membranes from induced pluripotent stem cells can serve as a tumour-antigen pool, and that a nanoparticle vaccine consisting of self-assembled commercial adjuvants wrapped by such membranes robustly stimulated innate immunity, evaded antigen-specific tolerance and activated B-cell and T-cell responses, which were mediated by epitopes from the abundance of antigens shared between the membranes of tumour cells and pluripotent stem cells. In mice, the vaccine elicited systemic antitumour memory T-cell and B-cell responses, as well as tumour-specific immune responses after a tumour challenge, and inhibited the progression of melanoma, colon cancer, breast cancer and post-operative lung metastases. Harnessing antigens shared by pluripotent stem cell membranes and tumour membranes may facilitate the development of universal cancer vaccines.