{"title":"基于改良变压器的伪星胶球质量估算","authors":"Lin Gao","doi":"arxiv-2408.13280","DOIUrl":null,"url":null,"abstract":"A modified Transformer model is introduced for estimating the mass of\npseudoscalar glueball in lattice QCD. The model takes as input a sequence of\nfloating-point numbers with lengths ranging from 30 to 35 and produces a\ntwo-dimensional vector output. It integrates floating-point embeddings and\npositional encoding, and is trained using binary cross-entropy loss. The paper\nprovides a detailed description of the model's components and training methods,\nand compares the performance of the traditional least squares method, the\npreviously used deep neural network, and the modified Transformer in mass\nestimation. The results show that the modified Transformer model achieves\ngreater accuracy in mass estimation than the traditional least squares method.\nAdditionally, compared to the deep neural network, this model utilizes\npositional encoding and can handle input sequences of varying lengths, offering\nenhanced adaptability.","PeriodicalId":501191,"journal":{"name":"arXiv - PHYS - High Energy Physics - Lattice","volume":"30 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Estimation of the pseudoscalar glueball mass based on a modified Transformer\",\"authors\":\"Lin Gao\",\"doi\":\"arxiv-2408.13280\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A modified Transformer model is introduced for estimating the mass of\\npseudoscalar glueball in lattice QCD. The model takes as input a sequence of\\nfloating-point numbers with lengths ranging from 30 to 35 and produces a\\ntwo-dimensional vector output. It integrates floating-point embeddings and\\npositional encoding, and is trained using binary cross-entropy loss. 
The paper\\nprovides a detailed description of the model's components and training methods,\\nand compares the performance of the traditional least squares method, the\\npreviously used deep neural network, and the modified Transformer in mass\\nestimation. The results show that the modified Transformer model achieves\\ngreater accuracy in mass estimation than the traditional least squares method.\\nAdditionally, compared to the deep neural network, this model utilizes\\npositional encoding and can handle input sequences of varying lengths, offering\\nenhanced adaptability.\",\"PeriodicalId\":501191,\"journal\":{\"name\":\"arXiv - PHYS - High Energy Physics - Lattice\",\"volume\":\"30 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - PHYS - High Energy Physics - Lattice\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.13280\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - PHYS - High Energy Physics - Lattice","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.13280","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Estimation of the pseudoscalar glueball mass based on a modified Transformer
A modified Transformer model is introduced for estimating the mass of the
pseudoscalar glueball in lattice QCD. The model takes as input a sequence of
floating-point numbers with a length ranging from 30 to 35 and produces a
two-dimensional vector as output. It integrates floating-point embeddings with
positional encoding and is trained with a binary cross-entropy loss. The paper
describes the model's components and training methods in detail and compares
the mass-estimation performance of the traditional least-squares method, a
previously used deep neural network, and the modified Transformer. The results
show that the modified Transformer achieves greater accuracy in mass
estimation than the traditional least-squares method. Moreover, unlike the
deep neural network, this model uses positional encoding and can handle input
sequences of varying lengths, offering greater adaptability.
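The abstract's description (variable-length float sequences embedded per scalar, positional encoding, a two-dimensional output head, binary cross-entropy loss) can be sketched in PyTorch. This is a minimal illustration, not the paper's implementation: the layer sizes, the learned positional encoding, and the mean-pooling step are assumptions made here for concreteness.

```python
# Hedged sketch of a Transformer-style mass estimator as described in the
# abstract. Hyperparameters (d_model, nhead, num_layers) are illustrative
# assumptions; the paper's actual architecture may differ.
import torch
import torch.nn as nn

class GlueballMassEstimator(nn.Module):
    def __init__(self, d_model=32, nhead=4, num_layers=2, max_len=35):
        super().__init__()
        # "Floating-point embedding": project each scalar to d_model dims.
        self.embed = nn.Linear(1, d_model)
        # Learned positional encoding for sequences up to max_len tokens
        # (a sinusoidal encoding would also fit the abstract's description).
        self.pos = nn.Parameter(torch.zeros(max_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        # Two-dimensional output vector, trained with binary cross-entropy.
        self.head = nn.Linear(d_model, 2)

    def forward(self, x):
        # x: (batch, seq_len) floats, with 30 <= seq_len <= 35.
        h = self.embed(x.unsqueeze(-1)) + self.pos[: x.size(1)]
        h = self.encoder(h).mean(dim=1)  # pool over the variable-length axis
        return self.head(h)

model = GlueballMassEstimator()
out = model(torch.randn(4, 33))  # batch of 4 input sequences of length 33
# BCE-with-logits loss against placeholder two-dimensional targets.
loss = nn.BCEWithLogitsLoss()(out, torch.rand(4, 2))
```

Because the positional encoding is indexed by the actual sequence length, the same model accepts any length up to `max_len`, which is the adaptability advantage the abstract claims over the fixed-input deep neural network.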