Semantic Structure in Deep Learning
Pub Date: 2021-11-23 | DOI: 10.1146/annurev-linguistics-031120-122924
Ellie Pavlick
Deep learning has recently come to dominate computational linguistics, leading to claims of human-level performance in a range of language processing tasks. Like much previous computational work, deep learning–based linguistic representations adhere to the distributional meaning-in-use hypothesis, deriving semantic representations from word co-occurrence statistics. However, current deep learning methods entail fundamentally new models of lexical and compositional meaning that are ripe for theoretical analysis. Whereas traditional distributional semantics models take a bottom-up approach in which sentence meaning is characterized by explicit composition functions applied to word meanings, new approaches take a top-down approach in which sentence representations are treated as primary and representations of words and syntax are viewed as emergent. This article summarizes our current understanding of how well such representations capture lexical semantics, world knowledge, and composition. The goal is to foster increased collaboration on testing the implications of such representations as general-purpose models of semantics.
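To make the bottom-up/top-down contrast concrete, the sketch below (an editorial illustration, not code from the review) builds count-based distributional word vectors from co-occurrence statistics over a tiny hypothetical corpus and composes a sentence vector with an explicit averaging function, the traditional bottom-up route that contextual deep learning models replace with sentence-first representations from which word-level structure is treated as emergent.

```python
# Minimal sketch (not from the review): count-based distributional vectors
# with explicit bottom-up composition, using a tiny hypothetical corpus.
from collections import Counter, defaultdict

corpus = [
    "the dog chased the cat".split(),
    "the cat chased the mouse".split(),
    "the dog ate the bone".split(),
]

vocab = sorted({w for sent in corpus for w in sent})

# Symmetric co-occurrence counts within a +/-2 word window.
cooc = defaultdict(Counter)
window = 2
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                cooc[w][sent[j]] += 1

def word_vector(w):
    # A word is represented by its co-occurrence counts over the vocabulary.
    return [cooc[w][v] for v in vocab]

def sentence_vector(sent):
    # Explicit composition function (bottom-up): average the word vectors.
    vecs = [word_vector(w) for w in sent]
    return [sum(col) / len(sent) for col in zip(*vecs)]

print(word_vector("dog"))
print(sentence_vector("the dog chased the cat".split()))
```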
{"title":"Semantic Structure in Deep Learning","authors":"Ellie Pavlick","doi":"10.1146/annurev-linguistics-031120-122924","DOIUrl":"https://doi.org/10.1146/annurev-linguistics-031120-122924","url":null,"abstract":"Deep learning has recently come to dominate computational linguistics, leading to claims of human-level performance in a range of language processing tasks. Like much previous computational work, deep learning–based linguistic representations adhere to the distributional meaning-in-use hypothesis, deriving semantic representations from word co-occurrence statistics. However, current deep learning methods entail fundamentally new models of lexical and compositional meaning that are ripe for theoretical analysis. Whereas traditional distributional semantics models take a bottom-up approach in which sentence meaning is characterized by explicit composition functions applied to word meanings, new approaches take a top-down approach in which sentence representations are treated as primary and representations of words and syntax are viewed as emergent. This article summarizes our current understanding of how well such representations capture lexical semantics, world knowledge, and composition. The goal is to foster increased collaboration on testing the implications of such representations as general-purpose models of semantics. Expected final online publication date for the Annual Review of Linguistics, Volume 8 is January 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.","PeriodicalId":45803,"journal":{"name":"Annual Review of Linguistics","volume":"30 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2021-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88284915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neurocomputational Models of Language Processing
Pub Date: 2021-11-18 | DOI: 10.1146/annurev-linguistics-051421-020803
J. Hale, Luca Campanelli, Jixing Li, Christophe Pallier, Jonathan Brennan
Efforts to understand the brain bases of language face the Mapping Problem: At what level do linguistic computations and representations connect to human neurobiology? We review one approach to this problem that relies on rigorously defined computational models to specify the links between linguistic features and neural signals. Such tools can be used to estimate linguistic predictions, model linguistic features, and specify a sequence of processing steps that may be quantitatively fit to neural signals collected while participants use language. Progress has been helped by advances in machine learning, attention to linguistically interpretable models, and openly shared data sets that allow researchers to compare and contrast a variety of models. We describe one such data set in detail in the Supplementary Appendix.
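One common instance of the strategy described above is to regress a word-by-word predictor derived from a language model, such as surprisal, against a recorded neural signal. The sketch below is an illustration only, with made-up surprisal values and a synthetic signal rather than anything from the article or its Supplementary Appendix; it shows the shape of such a least-squares fit in NumPy.

```python
# Minimal sketch (synthetic data, not the article's data set): fitting a
# word-by-word linguistic predictor (surprisal) to a neural signal by
# ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-word surprisal values (in bits) from some language model.
surprisal = np.array([2.1, 7.8, 3.3, 9.0, 1.2, 6.4, 4.5, 8.2])

# Synthetic "neural" amplitude per word: a linear effect of surprisal plus noise.
neural = 0.5 * surprisal + rng.normal(scale=0.3, size=surprisal.shape)

# Design matrix with an intercept column; solve for the coefficients.
X = np.column_stack([np.ones_like(surprisal), surprisal])
beta, residuals, rank, _ = np.linalg.lstsq(X, neural, rcond=None)

print(f"intercept = {beta[0]:.3f}, surprisal effect = {beta[1]:.3f}")
```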
{"title":"Neurocomputational Models of Language Processing","authors":"J. Hale, Luca Campanelli, Jixing Li, Christophe Pallier, Jonathan Brennan","doi":"10.1146/annurev-linguistics-051421-020803","DOIUrl":"https://doi.org/10.1146/annurev-linguistics-051421-020803","url":null,"abstract":"Efforts to understand the brain bases of language face the Mapping Problem: At what level do linguistic computations and representations connect to human neurobiology? We review one approach to this problem that relies on rigorously defined computational models to specify the links between linguistic features and neural signals. Such tools can be used to estimate linguistic predictions, model linguistic features, and specify a sequence of processing steps that may be quantitatively fit to neural signals collected while participants use language. Progress has been helped by advances in machine learning, attention to linguistically interpretable models, and openly shared data sets that allow researchers to compare and contrast a variety of models. We describe one such data set in detail in the Supplementary Appendix. Expected final online publication date for the Annual Review of Linguistics, Volume 8 is January 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.","PeriodicalId":45803,"journal":{"name":"Annual Review of Linguistics","volume":"103 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2021-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80712099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reverse Engineering Language Acquisition with Child-Centered Long-Form Recordings
Pub Date: 2021-11-15 | DOI: 10.1146/annurev-linguistics-031120-122120
Marvin Lavechin, Maureen de Seyssel, Lucas Gautheron, E. Dupoux, Alejandrina Cristià
Language use in everyday life can be studied using lightweight, wearable recorders that collect long-form recordings—that is, audio (including speech) over whole days. The hardware and software underlying this technique are increasingly accessible and inexpensive, and these data are revolutionizing the language acquisition field. We first place this technique into the broader context of the current ways of studying both the input being received by children and children's own language production, laying out the main advantages and drawbacks of long-form recordings. We then go on to argue that a unique advantage of long-form recordings is that they can fuel realistic models of early language acquisition that use speech to represent children's input and/or to establish production benchmarks. To enable the field to make the most of this unique empirical and conceptual contribution, we outline what this reverse engineering approach from long-form recordings entails, why it is useful, and how to evaluate success.
{"title":"Reverse Engineering Language Acquisition with Child-Centered Long-Form Recordings","authors":"Marvin Lavechin, Maureen de Seyssel, Lucas Gautheron, E. Dupoux, Alejandrina Cristià","doi":"10.1146/annurev-linguistics-031120-122120","DOIUrl":"https://doi.org/10.1146/annurev-linguistics-031120-122120","url":null,"abstract":"Language use in everyday life can be studied using lightweight, wearable recorders that collect long-form recordings—that is, audio (including speech) over whole days. The hardware and software underlying this technique are increasingly accessible and inexpensive, and these data are revolutionizing the language acquisition field. We first place this technique into the broader context of the current ways of studying both the input being received by children and children's own language production, laying out the main advantages and drawbacks of long-form recordings. We then go on to argue that a unique advantage of long-form recordings is that they can fuel realistic models of early language acquisition that use speech to represent children's input and/or to establish production benchmarks. To enable the field to make the most of this unique empirical and conceptual contribution, we outline what this reverse engineering approach from long-form recordings entails, why it is useful, and how to evaluate success. Expected final online publication date for the Annual Review of Linguistics, Volume 8 is January 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.","PeriodicalId":45803,"journal":{"name":"Annual Review of Linguistics","volume":"4 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2021-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79690687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stance and Stancetaking
Pub Date: 2021-11-15 | DOI: 10.1146/annurev-linguistics-031120-121256
S. Kiesling
Stance and stancetaking are considered here as related concepts that help to explain the patterning of language and the motivations for the use of lexical items, constructions, and discourse markers. I begin with a discussion of how stance can be used in variation analysis to help explain the patterning of variables and directions of change, and how stance is central in any understanding of the indexicality of sociolinguistic variables. I then provide a discussion of several approaches to theorizing stance and explicate a stance model that combines a number of these approaches, arguing that such a model should include three dimensions: evaluation, alignment, and investment. Finally, I outline several ways that stance has been operationalized in quantitative analyses, including analyses based on the model outlined.
{"title":"Stance and Stancetaking","authors":"S. Kiesling","doi":"10.1146/annurev-linguistics-031120-121256","DOIUrl":"https://doi.org/10.1146/annurev-linguistics-031120-121256","url":null,"abstract":"Stance and stancetaking are considered here as related concepts that help to explain the patterning of language and the motivations for the use of lexical items, constructions, and discourse markers. I begin with a discussion of how stance can be used in variation analysis to help explain the patterning of variables and directions of change, and how stance is central in any understanding of the indexicality of sociolinguistic variables. I then provide a discussion of several approaches to theorizing stance and explicate a stance model that combines a number of these approaches, arguing that such a model should include three dimensions: evaluation, alignment, and investment. Finally, I outline several ways that stance has been operationalized in quantitative analyses, including analyses based on the model outlined. Expected final online publication date for the Annual Review of Linguistics, Volume 8 is January 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.","PeriodicalId":45803,"journal":{"name":"Annual Review of Linguistics","volume":"31 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2021-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79352501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the Acquisition of Attitude Verbs
Pub Date: 2021-10-14 | DOI: 10.1146/annurev-linguistics-032521-053009
V. Hacquard, J. Lidz
Attitude verbs, such as think, want, and know, describe internal mental states that leave few cues as to their meanings in the physical world. Consequently, their acquisition requires learners to draw from indirect evidence stemming from the linguistic and conversational contexts in which they occur. This provides us a unique opportunity to probe the linguistic and cognitive abilities that children deploy in acquiring these words. Through a few case studies, we show how children make use of syntactic and pragmatic cues to figure out attitude verb meanings and how their successes, and even their mistakes, reveal remarkable conceptual, linguistic, and pragmatic sophistication.
{"title":"On the Acquisition of Attitude Verbs","authors":"V. Hacquard, J. Lidz","doi":"10.1146/annurev-linguistics-032521-053009","DOIUrl":"https://doi.org/10.1146/annurev-linguistics-032521-053009","url":null,"abstract":"Attitude verbs, such as think, want, and know, describe internal mental states that leave few cues as to their meanings in the physical world. Consequently, their acquisition requires learners to draw from indirect evidence stemming from the linguistic and conversational contexts in which they occur. This provides us a unique opportunity to probe the linguistic and cognitive abilities that children deploy in acquiring these words. Through a few case studies, we show how children make use of syntactic and pragmatic cues to figure out attitude verb meanings and how their successes, and even their mistakes, reveal remarkable conceptual, linguistic, and pragmatic sophistication. Expected final online publication date for the Annual Review of Linguistics, Volume 8 is January 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.","PeriodicalId":45803,"journal":{"name":"Annual Review of Linguistics","volume":"20 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2021-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75786011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When Do Children Lose the Language Instinct? A Critical Review of the Critical Periods Literature
Pub Date: 2021-10-08 | DOI: 10.1146/annurev-linguistics-032521-053234
Joshua K. Hartshorne
While it is clear that children are more successful at learning language than adults are—whether first language or second—there is no agreement as to why. Is it due to greater neural plasticity, greater motivation, more ample opportunity for learning, superior cognitive function, lack of interference from a first language, or something else? A difficulty in teasing apart these theories is that while they make different empirical predictions, there are few unambiguous facts against which to test the theories. This is particularly true when it comes to the most basic questions about the phenomenon: When does the childhood advantage dissipate, and how rapidly does it do so? I argue that a major reason for the lack of consensus is limitations in the research methods used to date. I conclude by discussing a recently emerging methodology and by making suggestions about the path forward.
{"title":"When Do Children Lose the Language Instinct? A Critical Review of the Critical Periods Literature","authors":"Joshua K. Hartshorne","doi":"10.1146/annurev-linguistics-032521-053234","DOIUrl":"https://doi.org/10.1146/annurev-linguistics-032521-053234","url":null,"abstract":"While it is clear that children are more successful at learning language than adults are—whether first language or second—there is no agreement as to why. Is it due to greater neural plasticity, greater motivation, more ample opportunity for learning, superior cognitive function, lack of interference from a first language, or something else? A difficulty in teasing apart these theories is that while they make different empirical predictions, there are few unambiguous facts against which to test the theories. This is particularly true when it comes to the most basic questions about the phenomenon: When does the childhood advantage dissipate, and how rapidly does it do so? I argue that a major reason for the lack of consensus is limitations in the research methods used to date. I conclude by discussing a recently emerging methodology and by making suggestions about the path forward. Expected final online publication date for the Annual Review of Linguistics, Volume 8 is January 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.","PeriodicalId":45803,"journal":{"name":"Annual Review of Linguistics","volume":"20 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2021-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76043560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Probabilistic Turn in Semantics and Pragmatics
Pub Date: 2021-10-05 | DOI: 10.1146/annurev-linguistics-031120-015515
K. Erk
This article provides an overview of graded and probabilistic approaches in semantics and pragmatics. These approaches share a common set of core research goals: (a) a concern with phenomena that are best described as graded, including a vast lexicon of words whose meaning adapts flexibly to the contexts in which they are used, as well as reasoning under uncertainty about interlocutors, their goals, and their strategies; (b) the need to show that representations are learnable, that a listener can learn semantic representations and pragmatic reasoning from data; (c) an emphasis on empirical evaluation against experimental data or corpus data at scale; and (d) scaling up to the full size of the lexicon. The methods used are sometimes explicitly probabilistic and sometimes not. Previously, there were assumed to be clear boundaries among probabilistic frameworks, classifiers in machine learning, and distributional approaches, but these boundaries have been blurred. Frameworks in semantics and pragmatics use all three of these, sometimes in combination, to address the four core research questions above.
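Reasoning under uncertainty about interlocutors and their goals is often formalized in probabilistic pragmatics via the Rational Speech Acts recursion; the sketch below is an editorial illustration, not a claim about which frameworks the article surveys. It implements the textbook literal-listener, pragmatic-speaker, pragmatic-listener chain for a toy referential game with three objects and three one-word utterances.

```python
# Minimal Rational Speech Acts sketch (illustrative assumption, not taken
# from the article): a toy referential game with three objects and three
# one-word utterances.
import numpy as np

objects = ["blue_square", "blue_circle", "green_square"]
utterances = ["blue", "green", "square"]

# Literal truth conditions: meaning[u][o] = 1 if utterance u is true of object o.
meaning = np.array([
    [1, 1, 0],  # "blue"
    [0, 0, 1],  # "green"
    [1, 0, 1],  # "square"
], dtype=float)

def normalize(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

# Literal listener: P(object | utterance), truth conditions with a uniform prior.
L0 = normalize(meaning, axis=1)

# Pragmatic speaker: P(utterance | object), softmax of the literal listener's
# log-probabilities with rationality parameter alpha (uniform utterance cost).
alpha = 1.0
with np.errstate(divide="ignore"):
    S1 = normalize(np.exp(alpha * np.log(L0)).T, axis=1)  # rows: objects

# Pragmatic listener: invert the speaker with a uniform prior over objects.
L1 = normalize(S1.T, axis=1)  # rows: utterances

for u, row in zip(utterances, L1):
    print(u, dict(zip(objects, row.round(2))))
```

Running the sketch, the pragmatic listener hearing "blue" assigns more probability to blue_circle than blue_square, since a speaker intending the square could have said "square", the kind of implicature-like strengthening that graded, probabilistic accounts are designed to capture.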
{"title":"The Probabilistic Turn in Semantics and Pragmatics","authors":"K. Erk","doi":"10.1146/annurev-linguistics-031120-015515","DOIUrl":"https://doi.org/10.1146/annurev-linguistics-031120-015515","url":null,"abstract":"This article provides an overview of graded and probabilistic approaches in semantics and pragmatics. These approaches share a common set of core research goals: ( a) a concern with phenomena that are best described as graded, including a vast lexicon of words whose meaning adapts flexibly to the contexts in which they are used, as well as reasoning under uncertainty about interlocutors, their goals, and their strategies; ( b) the need to show that representations are learnable, that a listener can learn semantic representations and pragmatic reasoning from data; ( c) an emphasis on empirical evaluation against experimental data or corpus data at scale; and ( d) scaling up to the full size of the lexicon. The methods used are sometimes explicitly probabilistic and sometimes not. Previously, there were assumed to be clear boundaries among probabilistic frameworks, classifiers in machine learning, and distributional approaches, but these boundaries have been blurred. Frameworks in semantics and pragmatics use all three of these, sometimes in combination, to address the four core research questions above. Expected final online publication date for the Annual Review of Linguistics, Volume 8 is January 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.","PeriodicalId":45803,"journal":{"name":"Annual Review of Linguistics","volume":"45 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2021-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78786565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perspective Shift Across Modalities
Pub Date: 2021-10-04 | DOI: 10.1146/annurev-linguistics-031120-021042
E. Maier, M. Steinbach
Languages offer various ways to present what someone said, thought, imagined, felt, and so on from their perspective. The prototypical example of a perspective-shifting device is direct quotation. In this review we define perspective shift in terms of indexical shift: A direct quotation like “Selena said, ‘Oh, I don't know.’” involves perspective shift because the first-person indexical ‘I’ refers to Selena, not to the actual speaker. We then discuss a variety of noncanonical modality-specific perspective-shifting devices: role shift in signed language, quotatives in spoken language, free indirect discourse in written language, and point-of-view shift in visual language. We show that these devices permit complex mixed forms of perspective shift which may involve nonlinguistic gestural as well as visual components.
{"title":"Perspective Shift Across Modalities","authors":"E. Maier, M. Steinbach","doi":"10.1146/annurev-linguistics-031120-021042","DOIUrl":"https://doi.org/10.1146/annurev-linguistics-031120-021042","url":null,"abstract":"Languages offer various ways to present what someone said, thought, imagined, felt, and so on from their perspective. The prototypical example of a perspective-shifting device is direct quotation. In this review we define perspective shift in terms of indexical shift: A direct quotation like “Selena said, ‘Oh, I don't know.’” involves perspective shift because the first-person indexical ‘I’ refers to Selena, not to the actual speaker. We then discuss a variety of noncanonical modality-specific perspective-shifting devices: role shift in signed language, quotatives in spoken language, free indirect discourse in written language, and point-of-view shift in visual language. We show that these devices permit complex mixed forms of perspective shift which may involve nonlinguistic gestural as well as visual components. Expected final online publication date for the Annual Review of Linguistics, Volume 8 is January 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.","PeriodicalId":45803,"journal":{"name":"Annual Review of Linguistics","volume":"7 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2021-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85387673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Advances in Morphological Theory: Construction Morphology and Relational Morphology
Pub Date: 2021-09-17 | DOI: 10.1146/annurev-linguistics-031120-115118
J. Audring
In recent years, construction-based approaches to morphology have gained ground in the research community. This framework is characterized by the assumption that the mental lexicon is extensive and richly structured, containing not only a large number of stored words but also a wide variety of generalizations in the form of schemas. This review explores two construction-based theories, Construction Morphology and Relational Morphology. After outlining the basic theoretical architecture, the article presents an array of recent applications of a construction-based approach to morphological phenomena in various languages. In addition, it offers reflections on challenges and opportunities for further research. The review highlights those aspects of the theory that have proved particularly helpful in accommodating both the regularities and the quirks that are typical of the grammar of words.
{"title":"Advances in Morphological Theory: Construction Morphology and Relational Morphology","authors":"J. Audring","doi":"10.1146/annurev-linguistics-031120-115118","DOIUrl":"https://doi.org/10.1146/annurev-linguistics-031120-115118","url":null,"abstract":"In recent years, construction-based approaches to morphology have gained ground in the research community. This framework is characterized by the assumption that the mental lexicon is extensive and richly structured, containing not only a large number of stored words but also a wide variety of generalizations in the form of schemas. This review explores two construction-based theories, Construction Morphology and Relational Morphology. After outlining the basic theoretical architecture, the article presents an array of recent applications of a construction-based approach to morphological phenomena in various languages. In addition, it offers reflections on challenges and opportunities for further research. The review highlights those aspects of the theory that have proved particularly helpful in accommodating both the regularities and the quirks that are typical of the grammar of words. Expected final online publication date for the Annual Review of Linguistics, Volume 8 is January 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.","PeriodicalId":45803,"journal":{"name":"Annual Review of Linguistics","volume":"18 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2021-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77760618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How I Got Here and Where I’m Going Next
Pub Date: 2021-09-17 | DOI: 10.1146/annurev-linguistics-032620-045855
Sarah Thomason
My career falls into two distinct periods. The first two decades featured insecurity combined with the luck of wandering into situations that ultimately helped me become a better linguist and a better teacher. I had the insecurity mostly under control by the watershed year of 1988, when I published a favorably reviewed coauthored book on language contact and also became editor of Language. Language contact has occupied most of my research time since then, but my first encounter with Séliš-Ql’ispé (a.k.a. Montana Salish), in 1981, led to a 40-year dedication to finding out more about the language and its history.
{"title":"How I Got Here and Where I’m Going Next","authors":"Sarah Thomason","doi":"10.1146/annurev-linguistics-032620-045855","DOIUrl":"https://doi.org/10.1146/annurev-linguistics-032620-045855","url":null,"abstract":"My career falls into two distinct periods. The first two decades featured insecurity combined with the luck of wandering into situations that ultimately helped me become a better linguist and a better teacher. I had the insecurity mostly under control by the watershed year of 1988, when I published a favorably reviewed coauthored book on language contact and also became editor of Language. Language contact has occupied most of my research time since then, but my first encounter with Séliš-Ql’ispé (a.k.a. Montana Salish), in 1981, led to a 40-year dedication to finding out more about the language and its history. Expected final online publication date for the Annual Review of Linguistics, Volume 8 is January 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.","PeriodicalId":45803,"journal":{"name":"Annual Review of Linguistics","volume":"24 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2021-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83814291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}