On the Representational Capacity of Neural Language Models with Chain-of-Thought Reasoning
Franz Nowak, Anej Svete, Alexandra Butoi, Ryan Cotterell
arXiv - CS - Formal Languages and Automata Theory · 2024-06-20 · https://doi.org/arxiv-2406.14197
The performance of modern language models (LMs) has been improved by chain-of-thought (CoT) reasoning, i.e., the process of generating intermediate results that guide the model towards a final answer. A possible explanation for this improvement is that CoT reasoning extends an LM's computational power, as RNNs and transformers with additional scratch space are known to be Turing complete. Comparing LMs to Turing machines, however, introduces a category error: Turing machines decide language membership, whereas LMs define distributions over strings. To bridge this gap, we formalize CoT reasoning in a probabilistic setting. We present several results on the representational capacity of recurrent and transformer LMs with CoT reasoning, showing that they can represent the same family of distributions over strings as probabilistic Turing machines.
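
To make the category distinction concrete, the following minimal LaTeX sketch contrasts the two kinds of objects and gives one common probabilistic reading of CoT generation. The notation (Σ for the alphabet, y for an output string, c for an intermediate reasoning trace) is illustrative and not fixed by the abstract; in particular, reading CoT as a marginal over traces is an assumption about the formalization, not a claim taken from the paper itself.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% A Turing machine M decides membership in a language, i.e., a set of strings:
\[
  L(M) \subseteq \Sigma^*, \qquad M(\boldsymbol{y}) \in \{0, 1\}.
\]

% A language model p instead defines a probability distribution over strings:
\[
  p \colon \Sigma^* \to [0, 1], \qquad \sum_{\boldsymbol{y} \in \Sigma^*} p(\boldsymbol{y}) = 1.
\]

% One natural probabilistic reading of CoT (assumed notation): the model
% generates an intermediate trace c before the answer y, and the
% distribution over answers is the marginal over all traces:
\[
  p(\boldsymbol{y}) = \sum_{\boldsymbol{c} \in \Sigma^*} p(\boldsymbol{c}, \boldsymbol{y}).
\]

\end{document}

Comparing the first two displays shows why equating LMs with Turing machines is a category error: one object is a set, the other a distribution. The probabilistic setting the abstract refers to replaces the deterministic machine with a probabilistic Turing machine, which likewise induces a distribution over strings, making the two sides comparable.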