{"title":"The Information of Large Language Model Geometry","authors":"Zhiquan Tan, Chenghai Li, Weiran Huang","doi":"arxiv-2402.03471","DOIUrl":null,"url":null,"abstract":"This paper investigates the information encoded in the embeddings of large\nlanguage models (LLMs). We conduct simulations to analyze the representation\nentropy and discover a power law relationship with model sizes. Building upon\nthis observation, we propose a theory based on (conditional) entropy to\nelucidate the scaling law phenomenon. Furthermore, we delve into the\nauto-regressive structure of LLMs and examine the relationship between the last\ntoken and previous context tokens using information theory and regression\ntechniques. Specifically, we establish a theoretical connection between the\ninformation gain of new tokens and ridge regression. Additionally, we explore\nthe effectiveness of Lasso regression in selecting meaningful tokens, which\nsometimes outperforms the closely related attention weights. Finally, we\nconduct controlled experiments, and find that information is distributed across\ntokens, rather than being concentrated in specific \"meaningful\" tokens alone.","PeriodicalId":501433,"journal":{"name":"arXiv - CS - Information Theory","volume":"127 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Information Theory","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2402.03471","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This paper investigates the information encoded in the embeddings of large language models (LLMs). We conduct simulations to analyze the representation entropy and discover a power-law relationship with model size. Building upon this observation, we propose a theory based on (conditional) entropy to elucidate the scaling-law phenomenon. Furthermore, we delve into the auto-regressive structure of LLMs and examine the relationship between the last token and the previous context tokens using information theory and regression techniques. Specifically, we establish a theoretical connection between the information gain of new tokens and ridge regression. Additionally, we explore the effectiveness of Lasso regression in selecting meaningful tokens, which sometimes outperforms the closely related attention weights. Finally, we conduct controlled experiments and find that information is distributed across tokens, rather than being concentrated in specific "meaningful" tokens alone.
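
The representation-entropy measurement mentioned above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration, assuming the entropy is estimated as the von Neumann (matrix) entropy of the trace-normalized covariance of token embeddings; the model sizes, the synthetic embeddings, and the power-law fit are placeholders, not the paper's data or exact estimator.

# Minimal sketch (assumed setup, not the paper's exact procedure):
# estimate "representation entropy" as the von Neumann entropy of the
# trace-normalized covariance of token embeddings, then fit a power law
# against model size in log-log space.
import numpy as np

def matrix_entropy(embeddings: np.ndarray, eps: float = 1e-12) -> float:
    """Von Neumann entropy of the trace-normalized covariance of `embeddings`.

    embeddings: (num_tokens, hidden_dim) array of token representations.
    """
    X = embeddings - embeddings.mean(axis=0, keepdims=True)   # center features
    cov = X.T @ X / X.shape[0]                                 # (d, d) covariance
    cov /= np.trace(cov) + eps                                 # unit trace -> density-matrix-like
    eigvals = np.linalg.eigvalsh(cov)
    eigvals = eigvals[eigvals > eps]                           # drop numerical zeros
    return float(-(eigvals * np.log(eigvals)).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model_sizes = [125e6, 350e6, 1.3e9, 2.7e9]                 # illustrative parameter counts
    entropies = []
    for size in model_sizes:
        hidden_dim = int(32 * np.log2(size))                   # stand-in for real embedding width
        fake_embeddings = rng.normal(size=(512, hidden_dim))   # stand-in for real LLM embeddings
        entropies.append(matrix_entropy(fake_embeddings))
    # Slope of the log-log fit plays the role of the power-law exponent.
    slope, intercept = np.polyfit(np.log(model_sizes), np.log(entropies), deg=1)
    print(f"fitted power-law exponent (synthetic data): {slope:.3f}")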
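
The ridge/Lasso analysis of the last token against its context can likewise be sketched. The example below is one assumed way to set it up with scikit-learn: the last token's hidden state is regressed on the context tokens' hidden states, and the per-token coefficient magnitudes serve as importance scores that could be compared against attention weights. The shapes, regularization strengths, and synthetic data are all hypothetical.

# Minimal sketch (assumed setup): regress the last token's hidden state on
# the hidden states of its context tokens. Ridge mirrors the information-gain
# connection; Lasso's sparse coefficients give a token-selection score.
import numpy as np
from sklearn.linear_model import Ridge, Lasso

def context_regression(context: np.ndarray, last: np.ndarray, alpha: float = 1.0):
    """Fit the last-token representation as a linear combination of context tokens.

    context: (num_context_tokens, hidden_dim) hidden states of previous tokens.
    last:    (hidden_dim,) hidden state of the last token.
    Returns absolute per-token coefficients for the ridge and Lasso fits.
    """
    # Each hidden dimension of `last` is a target sample; each context token
    # is a feature: design matrix is (hidden_dim, num_context_tokens).
    X = context.T
    y = last

    ridge = Ridge(alpha=alpha, fit_intercept=False).fit(X, y)
    lasso = Lasso(alpha=alpha * 1e-2, fit_intercept=False, max_iter=10_000).fit(X, y)

    # One weight per context token; larger magnitude = that token explains more
    # of the last token's representation. Lasso zeros out uninformative tokens.
    return np.abs(ridge.coef_), np.abs(lasso.coef_)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hidden_dim, num_context = 768, 32                          # illustrative sizes
    context = rng.normal(size=(num_context, hidden_dim))
    # Synthetic "last token" built mostly from context tokens 5 and 17.
    last = 0.6 * context[5] + 0.4 * context[17] + 0.05 * rng.normal(size=hidden_dim)
    ridge_w, lasso_w = context_regression(context, last)
    print("top context tokens by Lasso weight:", np.argsort(-lasso_w)[:3])

Under this setup, Lasso's sparsity zeroes out most context tokens, which is one way to operationalize "selecting meaningful tokens"; the paper's actual formulation may differ.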