{"title":"SyntaxFest 2019特邀演讲-交际主体中的归纳偏见和语言出现","authors":"Emmanuel Dupoux","doi":"10.18653/v1/W19-7701","DOIUrl":null,"url":null,"abstract":"Despite spectacular progress in language modeling tasks, neural networks still fall short of the performance of human infants when it comes to learning a language from scarce and noisy data. Such performance presumably stems from human-specific inductive biases in the neural networks sustaining language acquisitions in the child. Here, we use two paradigms to study experimentally such inductive biases in artificial neural networks. The first one relies on iterative learning, where a sequence of agents learn from each other, simulating historical linguistic transmission. We find evidence that sequence to sequence neural models have some of the human inductive biases (like the preference for local dependencies), but lack others (like the preference for nonredundant markers of argument structure). The second paradigm relies on language emergence, where two agents engage in a communicative game. Here we find that sequence to sequence networks lack the preference for efficient communication found in humans, and in fact display an anti-Zipfian law of abbreviation. We conclude that the study of the inductive biases of neural networks is an important topic to improve the data efficiency of current systems.","PeriodicalId":443459,"journal":{"name":"Proceedings of the Fifth International Conference on Dependency Linguistics (Depling, SyntaxFest 2019)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SyntaxFest 2019 Invited talk - Inductive biases and language emergence in communicative agents\",\"authors\":\"Emmanuel Dupoux\",\"doi\":\"10.18653/v1/W19-7701\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Despite spectacular progress in language modeling tasks, neural networks still fall short of the performance of human infants when it comes to learning a language from scarce and noisy data. Such performance presumably stems from human-specific inductive biases in the neural networks sustaining language acquisitions in the child. Here, we use two paradigms to study experimentally such inductive biases in artificial neural networks. The first one relies on iterative learning, where a sequence of agents learn from each other, simulating historical linguistic transmission. We find evidence that sequence to sequence neural models have some of the human inductive biases (like the preference for local dependencies), but lack others (like the preference for nonredundant markers of argument structure). The second paradigm relies on language emergence, where two agents engage in a communicative game. Here we find that sequence to sequence networks lack the preference for efficient communication found in humans, and in fact display an anti-Zipfian law of abbreviation. 
We conclude that the study of the inductive biases of neural networks is an important topic to improve the data efficiency of current systems.\",\"PeriodicalId\":443459,\"journal\":{\"name\":\"Proceedings of the Fifth International Conference on Dependency Linguistics (Depling, SyntaxFest 2019)\",\"volume\":\"8 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Fifth International Conference on Dependency Linguistics (Depling, SyntaxFest 2019)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.18653/v1/W19-7701\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Fifth International Conference on Dependency Linguistics (Depling, SyntaxFest 2019)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.18653/v1/W19-7701","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
SyntaxFest 2019 Invited talk - Inductive biases and language emergence in communicative agents
Despite spectacular progress on language modeling tasks, neural networks still fall short of the performance of human infants when it comes to learning a language from scarce and noisy data. This gap presumably stems from human-specific inductive biases in the neural networks sustaining language acquisition in the child. Here, we use two paradigms to experimentally study such inductive biases in artificial neural networks. The first relies on iterated learning, in which each agent in a chain learns from the previous one, simulating the historical transmission of language. We find evidence that sequence-to-sequence neural models share some human inductive biases (such as the preference for local dependencies) but lack others (such as the preference for nonredundant markers of argument structure). The second paradigm relies on language emergence, in which two agents engage in a communicative game. Here we find that sequence-to-sequence networks lack the human preference for efficient communication and in fact display an anti-Zipfian law of abbreviation. We conclude that the study of the inductive biases of neural networks is an important avenue for improving the data efficiency of current systems.
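As a rough illustration of the iterated-learning paradigm the abstract describes, the sketch below runs a chain of learners, each exposed to only a subsample of the previous generation's productions (a transmission bottleneck). The learner here is a toy memorizer standing in for the sequence-to-sequence agents of the talk; the meaning space, alphabet, and bottleneck size are all illustrative assumptions, not details from the paper.

```python
# Minimal iterated-learning loop: each generation learns a meaning->form
# mapping from a subsample of the previous generation's output.
# The toy memorizer below is a stand-in for the seq2seq agents in the talk;
# all names and parameters here are illustrative assumptions.
import random

MEANINGS = [(color, shape) for color in "RGB" for shape in "XO"]

def random_language():
    """Generation zero: an arbitrary 3-letter form for each meaning."""
    return {m: "".join(random.choices("ab", k=3)) for m in MEANINGS}

def learn(observations):
    """Toy learner: memorize observed pairs, invent forms for unseen meanings."""
    lang = dict(observations)
    for m in MEANINGS:
        lang.setdefault(m, "".join(random.choices("ab", k=3)))
    return lang

def transmit(lang, bottleneck=4):
    """Expose the next learner to only a subset of meaning-form pairs."""
    shown = random.sample(MEANINGS, bottleneck)
    return {m: lang[m] for m in shown}

lang = random_language()
for generation in range(10):
    lang = learn(transmit(lang))
print(lang)  # over generations, the learner's inductive biases shape the language
```

With a real learning model in place of the memorizer, regularities that the model finds easy to acquire survive the bottleneck, which is how the paradigm exposes inductive biases.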
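The anti-Zipfian finding from the second paradigm can be checked on any emergent lexicon with a simple correlation. Zipf's law of abbreviation predicts a negative correlation between a form's usage frequency and its length; the abstract reports that the agents show the opposite sign. A hedged sketch, using a made-up lexicon purely for illustration:

```python
# Sketch: testing for a (anti-)Zipfian law of abbreviation.
# rho < 0: Zipfian (frequent forms are short, as in human languages)
# rho > 0: anti-Zipfian (frequent forms are long, as reported for the agents)
from scipy.stats import spearmanr

# (form, usage frequency) pairs from a hypothetical emergent language
lexicon = [("aab", 120), ("abba", 80), ("babab", 30), ("aabbaa", 10)]

lengths = [len(form) for form, _ in lexicon]
freqs = [freq for _, freq in lexicon]

rho, p = spearmanr(freqs, lengths)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```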