Title: Propagating the prior from shallow to deep with a pre-trained velocity-model Generative Transformer network
Authors: Randy Harsuko, Shijun Cheng, Tariq Alkhalifah
DOI: arxiv-2408.09767 (https://doi.org/arxiv-2408.09767)
Journal: arXiv - PHYS - Geophysics, vol. 13, no. 1
Published: 2024-08-19 (Journal Article)
Platform: Semanticscholar
Citations: 0
Abstract
Building subsurface velocity models is essential to utilizing seismic data
for Earth discovery, exploration, and monitoring. With
the dawn of machine learning, these velocity models (or, more precisely, their
distribution) can be stored accurately and efficiently in a generative model.
These stored velocity model distributions can be utilized to regularize or
quantify uncertainties in inverse problems, like full waveform inversion.
However, most generators, like normalizing flows or diffusion models, treat the
image (velocity model) uniformly, disregarding spatial dependencies and
resolution changes with respect to the observation locations. To address this
weakness, we introduce VelocityGPT, a novel implementation that utilizes
Transformer decoders trained autoregressively to generate a velocity model from
shallow subsurface to deep. Because seismic data are typically recorded on
the Earth's surface, a top-down generator can use the inverted information
in the shallow section as a prior to guide generation of the deeper section. To
facilitate the implementation, we use an additional network to compress the
velocity model. We also inject prior information, such as well data or structure
(represented by a migration image), to guide generation of the velocity model. Using
synthetic data, we demonstrate the effectiveness of VelocityGPT as a promising
approach in generative model applications for seismic velocity model building.
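The shallow-to-deep autoregressive idea described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: `toy_next_code_logits` is a stand-in for the Transformer decoder (which in the paper operates on codes from a separate compression network), and its depth bias is an invented toy prior. The point is only to show how each new cell is conditioned on everything generated above it.

```python
import numpy as np

def toy_next_code_logits(context, n_codes, rng):
    """Stand-in for the decoder: logits for the next discrete velocity code,
    conditioned on all previously generated (shallower) codes."""
    depth_bias = 0.05 * len(context)  # toy assumption: deeper cells favor higher codes
    logits = np.arange(n_codes, dtype=float) * (0.1 + 0.01 * depth_bias)
    return logits + rng.normal(0.0, 0.1, n_codes)

def generate_top_down(n_rows, n_cols, n_codes=16, seed=0):
    """Sample an (n_rows, n_cols) grid of codes row by row, top to bottom,
    so each depth row is generated conditioned on the shallower rows."""
    rng = np.random.default_rng(seed)
    grid = np.empty((n_rows, n_cols), dtype=int)
    context = []  # flattened generation history: the shallow "prior"
    for i in range(n_rows):
        for j in range(n_cols):
            logits = toy_next_code_logits(context, n_codes, rng)
            probs = np.exp(logits - logits.max())  # softmax over codes
            probs /= probs.sum()
            code = int(rng.choice(n_codes, p=probs))
            grid[i, j] = code
            context.append(code)
    return grid

grid = generate_top_down(8, 4)
print(grid.shape)
```

Injecting priors such as well data would, in this framing, amount to prepending known codes to `context` before sampling begins, so the decoder conditions on them the same way it conditions on shallower rows.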