L. Barceló-Coblijn, A. Benítez‐Burraco, Aritz Irurtzun
Usually, developmental language disorders are defined either symptomatically (based on a constellation of linguistic deficits appearing recurrently within a population) or etiologically (on the basis of a common underlying deficit), or both. On paper, each of these clinical categories is expected to be distinguished from other close entities at several levels of analysis (phenotypic, cognitive, neurobiological, genetic, etc.). Nonetheless, this is not typically the case: Comorbidity, variability, and heterogeneity are in fact a common outcome in clinical practice. Ultimately, different disorders may share the same underlying deficit (e.g., phonological dysfunction in dyslexia and SLI); conversely, different deficits may give rise to the same disorder (e.g., both visual problems and phonological deficits may contribute to dyslexia) (Benítez-Burraco 2013). If we want to achieve a better—and earlier—diagnosis of these conditions, we should improve the tools we employ at present. A promising approach is one relying on the endophenotypes of disorders. Endophenotypes may be defined as cognitive, neuroanatomical, neurophysiological, endocrine, or biochemical quantifiable components of the space between genes and diseases (Gould & Gottesman 2006). Endophenotypes refer to more specific (and more physiological) aspects of bodily function, and therefore allow a more accurate diagnosis of its dysfunction (Gottesman & Gould 2003). Here we would like to advance a putative endophenotype of language disorders that combines four factors: (1) linguistic analysis (syntactic computation), (2) information management (communicative strategies), (3) recent evo-devo insights into the nature of phenotypic variation, and (4) network approaches to emergent properties of complex systems (which language surely is; Deacon 2005).
To begin with, we would like to note that, although the set of pathological conditions already described by clinical linguists is ample, it is not unlimited either. In other words, variation is constrained or canalized, even in pathological states. At the same time, we observe that language is both sensitive to damage (e.g., some aspects of language processing, like the proper use of inflectional cues in verbal and nominal morphology, are perturbed in nearly all disorders) and resistant to perturbation (e.g., a nearly functional language faculty may emerge by the end of development in spite of severe underlying deficits).
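The network approach invoked in factor (4) can be made concrete with a toy sketch: build a graph whose nodes are words and whose edges are syntactic (dependency) links, then compute simple aggregate measures of the kind used to compare typical and atypical speech samples. The sentences, links, and metric below are our own illustrative inventions, not material from the abstract itself.

```python
from collections import defaultdict

def build_syntactic_network(dependency_links):
    """Build an undirected word graph from (head, dependent) pairs."""
    graph = defaultdict(set)
    for head, dep in dependency_links:
        graph[head].add(dep)
        graph[dep].add(head)
    return graph

def average_degree(graph):
    """Mean number of syntactic links per word (a simple network-level measure)."""
    if not graph:
        return 0.0
    return sum(len(nbrs) for nbrs in graph.values()) / len(graph)

# Hypothetical dependency links pooled from a small speech sample.
links = [
    ("eats", "dog"), ("eats", "bone"), ("dog", "the"),
    ("bone", "a"), ("eats", "quickly"),
]
net = build_syntactic_network(links)
print(sorted(net["eats"]))   # words directly linked to 'eats'
print(average_degree(net))   # global connectivity of the sample
```

On this view, network-level quantities such as average degree are candidate endophenotypic measures: they are computed over the whole sample rather than over individual utterances, which is what makes them sensitive to emergent, system-wide properties.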
Barceló-Coblijn, L., A. Benítez-Burraco & A. Irurtzun. 2015. Syntactic Networks as an Endophenotype of Developmental Language Disorders: An Evo-Devo Approach to Clinical Linguistics. Biolinguistics. doi:10.5964/bioling.9037
This paper tries to shed light on traditional and current observations that support the idea that language is subject to critical period effects. It is suggested that this idea is not adequately grounded in a view of language as a developmental phenomenon, which motivates moving from the now classic concept of language as a ‘faculty’ to a new concept of language as a ‘gradient’: i.e., an aggregate of cognitive abilities whose weight varies from one developmental stage to another and which exert crucial scaffolding effects on each other. Once this well-supported view is assumed, the idea of a ‘critical period’ becomes avoidable, for language can instantiate different forms of gradation, none of which is inherently normal or deviant relative to the others. In any event, a notion of ‘criticality’ is retained within this view, though simply to name the transitional effects of scaffolding influences within the gradient.
Balari, S. & G. Lorenzo. 2015. Should It Stay or Should It Go? A Critical Reflection on the Critical Period for Language. Biolinguistics. doi:10.5964/bioling.9027
Un-Cartesian linguistics is a research program with the aim of rethinking the nature of grammar as a domain of scientific inquiry, raising new questions about the constitutive role of grammar in the organization of our (rational) minds and selves. It reformulates the ‘Cartesian’ foundations of the modern Universal Grammar project, shifting emphasis away from the study of a domain-specific ‘innate’ module separate from thought, to the study of a sapiens-specific mode of cognition conditioned by both grammatical and lexical organization, and thus a particular cognitive phenotype, which is uniquely also a linguistic one. The purpose of this position paper is to introduce and motivate this new concept in its various dimensions and in accessible terms, and to define the ‘Un-Cartesian Hypothesis’: that the grammaticalization of the hominin brain in the evolutionary transition to our species uniquely explains why our cognitive mode involves a capacity for thought in a propositional format.
Hinzen, W. 2014. What Is Un-Cartesian Linguistics? Biolinguistics. doi:10.5964/bioling.8999
The overall goal of this paper is to evaluate theories that attempt to address the organizing principles of language and to review the development of these theories toward the integration of language within an interactive network of higher-level cognitive functions. Commencing with an overview of traditional concepts of language as modular, distinct, and innate, we focus first on areas that highlight the foundation of modularity theory, including various module definitions and criteria, and applications of modularity in information processing and biological systems. We also discuss challenges to the overall applicability of a modular system and limitations of modular models in dealing with adaptation, novelty, innate versus learned features, domain-general and domain-specific features, and developmental and age-related changes of cognitive organization. Prompted by the rapidly increasing amount of empirical data on the functional elements of the human brain, we then evaluate several major theories of cognition, including views that oppose modular organization and those that integrate modular and semi-modular views, with topological modularity in simpler functions and dynamic integration in higher-level cognitive functions. Within this framework, we discuss modular and non-modular components of linguistic knowledge, organizing principles of language viewed either as specific or as derived from other systems, and concepts of language as one of the cognitive functions or as the outcome of unique interactions among cognitive components. Emerging theories that integrate interactive network models support a cognitive architecture as a mosaic of domain-specific and domain-general processes involving both functional segregation and integration within a global neuronal workspace.
Within this anatomically distributed workspace, the language function represents unique interactions among cognitive components, consistent with an organization that is task-dependent, with a continuum between degrees of modular and shared processing. As a higher-level, learning-based, and effortful cognitive process, language transiently enlists a less modular organization for an efficient network configuration in interaction with several cognitive systems and the domain-general cognitive control/multiple-demand network.
Szalontai, Á. & K. Csiszár. 2014. Structural and Functional Organizing Principles of Language: Evolving Theories. Biolinguistics. doi:10.5964/bioling.9009
OK, it’s January the 23rd, we are at the MIT in Cambridge, Massachusetts. This will be a 60 minute interview with Noam Chomsky on the sixty to sixty-five years of his work, and we will try to cover as many topics as possible. To start off with this, I should put this into a context. I first started to interview Noam Chomsky about this [i.e. the history of generative grammar] two and a half years ago, right here at the MIT, and inadvertently, this grew into a whole series, and today’s interview is meant to be the end of the series, but not, hopefully, the end of our talks. [Both laugh.] Well, as I see it, and that’s a central part of the research project on your work I’m working on, there are, among many others, several red threads that run through your work, and that would be, first, the quest for simplicity in scientific description, and as we will see, that has several aspects, then the question of abstractness, which we will see in comparison to what went on before and what you started to work with. A closely related question that came to the forefront later was locality, local relations in mental computations. Fourth, the question of biolinguistics, meaning that language can be, and is seen by you, as a biological object in the final analysis, and also, that would be the fifth point, everything you did has always been developed in close collaboration with other people. So it’s not, we are not simply talking about the work of Noam Chomsky, but it’s a collaborative effort. Starting in 1946, I remember from my previous interviews that that is actually the period when you got to know who would become your teacher later on, Zellig Harris. And one of the first things you did was to read the galleys for his best-known work, Methods in Structural Linguistics (Harris 1951). 
There is another anecdote that I just saw in the morning, when for the first time I saw that Barcelona—I think it was in Spain somewhere in November—talk,1 when you said that another motive, apart from meeting Harris, for going into linguistics, was that you discovered that the Bible, the first words of the Bible had been mistranslated. Can you—maybe that’s a good point to start.
Schiffmann, M. 2014. The Interesting Part Is What Is Not Conscious: An Interview with Noam Chomsky. Biolinguistics. doi:10.5964/bioling.9011
Recent artificial-grammar learning (AGL) paradigms driven by the Chomsky hierarchy paved the way for direct comparisons between humans and animals in the learning of center embedding ([A[AB]B]). The A^nB^n grammars used by the first generation of such research lacked a crucial property of center embedding, namely that the pairs of elements are explicitly matched ([A1 [A2 B2] B1]). This type of indexing is implemented in second-generation A^nB^n grammars. This paper reviews recent studies using such grammars. Against the premises of these studies, we argue that even these newer A^nB^n grammars cannot test the learning of syntactic hierarchy. These studies nonetheless provide detailed information about the conditions under which human adults can learn an A^nB^n grammar with indexing. This knowledge serves to interpret recent animal studies, which make surprising claims about animals’ ability to handle center embedding.
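The contrast between the two generations of grammars can be stated operationally. A minimal sketch (the token scheme `A1`, `B1`, etc. is ours, not the reviewed studies’): a first-generation AnBn check only verifies that n A-tokens precede n B-tokens, while an indexed check additionally requires nested pairing, so that the outermost A matches the outermost B, as in [A1 [A2 B2] B1].

```python
def is_anbn(tokens):
    """First-generation check: n A-category tokens followed by n B-category tokens."""
    n = len(tokens) // 2
    if len(tokens) != 2 * n or n == 0:
        return False
    return all(t[0] == "A" for t in tokens[:n]) and all(t[0] == "B" for t in tokens[n:])

def is_indexed_anbn(tokens):
    """Second-generation check: A-B pairs must match like nested brackets,
    so A1 A2 B2 B1 is well-formed but A1 A2 B1 B2 is not."""
    if not is_anbn(tokens):
        return False
    n = len(tokens) // 2
    # The i-th A (outside-in) must pair with the i-th B counting from the end.
    return all(tokens[i][1:] == tokens[-(i + 1)][1:] for i in range(n))

print(is_indexed_anbn(["A1", "A2", "B2", "B1"]))  # True: properly nested
print(is_indexed_anbn(["A1", "A2", "B1", "B2"]))  # False: counts match, pairs crossed
```

The point at issue in the paper is that passing even the indexed check can in principle be achieved by pairwise matching strategies, without building the hierarchical structure that center embedding is meant to diagnose.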
Ojima, S. & K. Okanoya. 2014. The Non-Hierarchical Nature of the Chomsky Hierarchy-Driven Artificial-Grammar Learning. Biolinguistics. doi:10.5964/bioling.8997
This book, part of the Oxford Studies in Biolinguistics series, presents a state-of-the-art overview of the field, more specifically of psycho- and neurolinguistics and their relation to models of syntax, semantics, and morpho-phonology, while advancing its limits with cutting-edge research. A distinctive feature of the piece is the strong presence of interdisciplinary work and the internal coherence of the volume, integrating computational science, cognitive science, neurology, and psycholinguistics, as well as syntax, semantics, and morpho-phonology; an integration that is most welcome, as it triggers debate and a productive revisiting of the machinery assumed within all the aforementioned sub-disciplines of linguistics. The volume is organized around the notion of garden path sentences, relative clauses, and their relations at the processing level; it also addresses major problems of natural language processing and the relations between syntax, semantics, and morpho-phonology from a more general point of view. The editors have chosen to open the book with a reprinted article by Thomas Bever, from 1970 (which becomes a recurrent motif to which the contributors refer time and again as a point of departure, thus giving structural and thematic unity and coherence to the book as a whole), a locus classicus for psycholinguistic and neurocognitive approaches to ambiguity resolution, parsing strategies (then termed sentence perception), and so-called ‘garden path sentences’ (GPS), the best-known example being The horse raced past the barn fell, even if, as Tanenhaus claims in the Afterword, none of these is the prime theme of the work (which is mostly about the relation between language and general cognitive strategies, an early plea for holism). The opening seems appropriate, since it provides the reader with an overall perspective on the study of language as a concept analogous to those of “species or organ, as they are used in biological science” (p. 2). The article makes a case for distinguishing language as a mental/biological entity from language as a behavior; but, crucially, language structure and development are not to be isolated from the development of other cognitive capacities. Choosing this particular article is a statement in itself: Perceptual mechanisms, cognitive structures (including counting and number approximation, visual patterns and 2-D/3-D illusions), and linguistic structures (grammatical role assignment, abstraction of a structural pattern like ‘active’ or
Laka, I., M. Tanenhaus & D. Krivochen. 2014. At the Interface of (Bio)linguistics, Language Processing, and Neuropsychology. Biolinguistics. doi:10.5964/bioling.9007
This paper is aimed at clarifying one particular aspect of Derek Bickerton’s recent contribution to Biolinguistics (Bickerton 2014a), where he contends that biolinguists tend to emphasize the specifics of certain non-standard evolutionary models in order to prejudicially avoid the theory of natural selection. According to Bickerton (2014a: 78), “they [biolinguists] have problems with the notion of natural selection, up to and including a total failure to comprehend what is and how it works”. This is all the more understandable, also according to Bickerton, because even evolutionary psychologists and philosophers like Pinker and Dennett, who have devoted well-known papers and books to explaining and applying natural selection to the case of cognition and language, have failed to understand the real import of Darwin’s idea: “Natural selection could not ‘explain’ complex design”, claims Bickerton (2014a: 79), “even if Pinker & Bloom (1990), Dennett (1995), and others who are not biologists think it does. In fact, natural selection does not provide a single one of the factors that go into creating design”. Bickerton’s comments in the Biolinguistics piece are specifically targeted at the model of ‘self-organization’ associated with the complexity sciences, which is introduced in Longa (2001) as potentially capable of dealing with some recalcitrant problems of the evolution of language. Bickerton (2014a: 79) writes that Longa’s attacks point to “a straw man”, and that his claim that self-organization is an alternative to natural selection is “a category mistake”, for self-organization is simply one of the factors that generates the variation that natural selection selects from. So, according to Bickerton, natural selection and self-organization must be conceptualized as two complementary mechanisms that operate in a coordinated manner to bring about complex biological designs. In this response we want to explain that this conclusion is wrong and is based on wrong premises. 
For that purpose, we first document that biologists generally agree on the idea that natural selection creates design; second,
{"title":"Self-Organization and Natural Selection: The Intelligent Auntie’s Vade-Mecum","authors":"V. M. Longa, G. Lorenzo","doi":"10.5964/bioling.9013","DOIUrl":"https://doi.org/10.5964/bioling.9013","url":null,"abstract":"This paper is aimed at clarifying one particular aspect of Derek Bickerton’s recent contribution to Biolinguistics (Bickerton 2014a), where he contends that biolinguists tend to emphasize the specifics of certain non-standard evolutionary models in order to prejudicially avoid the theory of natural selection. According to Bickerton (2014a: 78), “they [biolinguists] have problems with the notion of natural selection, up to and including a total failure to comprehend what is and how it works”. This is the most understandable, also according to Bickerton, because even evolutionary psychologists and philosophers like Pinker and Dennett, who have devoted well-known papers and books to explaining and applying natural selection to the case of cognition and language, have failed to understand the real import of Darwin’s idea: “Natural selection could not ‘explain’ complex design”, claims Bickerton (2014a: 79), “even if Pinker & Bloom (1990), Dennett (1995), and others who are not biologists think it does. In fact, natural selection does not provide a single one of the factors that go into creating design”. Bickerton’s comments in the Biolinguistics piece are specifically targeted at the model of ‘self-organization’ associated to complexity sciences, which is introduced in Longa (2001) as potentially capable of dealing with some recalcitrant problems of the evolution of language. Bickerton (2014a: 79) writes that Longa’s attacks point to “a straw man”, and that his claim that self-organization is an alternative to natural selection is “a category mistake”, for selforganization is simply one of the factors that generates the variation that natural selection selects from. 
So, according to Bickerton, natural selection and self-organization must be conceptualized as two complementary mechanisms that operate in a coordinated manner to bring about complex biological designs. In this response we want to explain that this is a wrong conclusion supported on wrong premises. For that purpose, we first document that biologists generally agree on the idea that natural selection creates design; second,","PeriodicalId":54041,"journal":{"name":"Biolinguistics","volume":"1 1","pages":""},"PeriodicalIF":0.6,"publicationDate":"2014-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71076021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
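The complementary picture Bickerton defends — self-organization supplying structured variants, natural selection then choosing among them — can be illustrated with a toy simulation. This sketch is hypothetical and comes from neither author: "self-organization" is modeled as a selection-free local smoothing dynamic, and "design" as the longest run of identical symbols in a bit-string genome.

```python
import random

def self_organize(genome):
    # "Self-organization": a local, selection-free dynamic that imposes
    # order on a variant — here, majority-rule smoothing over a
    # three-bit neighborhood (ties resolve to 1).
    out = []
    for i in range(len(genome)):
        window = genome[max(0, i - 1):i + 2]
        out.append(1 if sum(window) * 2 >= len(window) else 0)
    return out

def fitness(genome):
    # Selection criterion: longest run of identical symbols, a crude
    # stand-in for "coherent design".
    best = run = 1
    for a, b in zip(genome, genome[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

def evolve(pop_size=30, length=16, generations=20, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Variation is first shaped by self-organization...
        pop = [self_organize(g) for g in pop]
        # ...and selection then chooses among the shaped variants.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        # Reproduction with a 5% per-bit mutation rate refills the pool.
        pop = [[bit if rng.random() > 0.05 else 1 - bit
                for bit in rng.choice(survivors)]
               for _ in range(pop_size)]
    return max(fitness(g) for g in pop)
```

In this sketch neither mechanism alone produces the outcome: smoothing orders variants without any fitness criterion, while selection ranks variants it did not shape — which is the "two complementary mechanisms" reading the response goes on to dispute.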
Conceptual and Methodological Problems with Comparative Work on Artificial Language Learning
J. Watumull, M. Hauser & R. Berwick (Biolinguistics, 2014-04-15, doi:10.5964/bioling.8995)
Several theoretical proposals for the evolution of language have sparked a renewed search for comparative data on human and non-human animal computational capacities. However, conceptual confusions still hinder the field, leading to experimental evidence that fails to test for comparable human competences. Here we focus on two conceptual and methodological challenges that affect the field generally: (1) properly characterizing the computational features of the faculty of language in the narrow sense; (2) defining and probing for human language-like computations via artificial language learning experiments in non-human animals. Our intent is to be critical in the service of clarity, in what we agree is an important approach to understanding how language evolved.