A MaxEnt learner for super-additive counting cumulativity
Seoyoung Kim
Glossa: A Journal of General Linguistics. Published 2022-06-20. DOI: 10.16995/glossa.5856 (https://doi.org/10.16995/glossa.5856)
Citations: 2
Abstract
Whereas most previous studies on (super-)gang effects examined cases where two weaker constraints jointly beat another stronger constraint (Albright 2012; Shih 2017; Breiss and Albright 2020), this paper addresses gang effects that arise from multiple violations of a single constraint, which Jäger and Rosenbach (2006) referred to as counting cumulativity. The focus of this paper is the super-additive version of counting cumulativity: cases where multiple violations of a weaker constraint not only overpower a single violation of a stronger constraint, but also exceed a simple multiple of the severity of a single violation. I report two natural language examples in which a morphophonological alternation in a compound is suppressed by the presence of marked segments in a super-additive manner: laryngeally marked consonants in Korean compound tensification and nasals in Japanese Rendaku. Using these two test cases, this paper argues that these types of super-additivity cannot be entirely captured by the traditional MaxEnt grammar; instead, a modified MaxEnt model is proposed, in which the degree of penalty is scaled up by the number of violations through a power function. This paper also provides a computational implementation of the proposed MaxEnt model, which learns the necessary parameters from quantitative language data. A series of learning simulations on Korean and Japanese shows that the MaxEnt learner is able to detect super-additive constraints and find the appropriate exponent values for those constraints, correctly capturing the probability distributions in the input data.
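The core idea of the modified model described above can be illustrated with a minimal sketch: in a standard MaxEnt grammar a candidate's penalty for a constraint is weight × violation count, whereas in the power-function variant the violation count is raised to a learned, constraint-specific exponent, so repeated violations of a weak constraint can outweigh a single violation of a strong one. The function names, weights, and exponent values below are illustrative assumptions, not the paper's actual learned parameters.

```python
import math

def harmony(violations, weights, exponents):
    # Penalty per constraint: weight * (violation count ** exponent).
    # With exponent 1.0 this reduces to the standard MaxEnt harmony;
    # an exponent > 1.0 makes repeated violations super-additive.
    return sum(w * (n ** k) for n, w, k in zip(violations, weights, exponents))

def maxent_probs(candidates, weights, exponents):
    # MaxEnt distribution: P(c) proportional to exp(-harmony(c)).
    scores = [math.exp(-harmony(v, weights, exponents)) for v in candidates]
    z = sum(scores)
    return [s / z for s in scores]

# Two hypothetical candidates, violation counts for (weak C1, strong C2).
cands = [[2, 0], [0, 1]]

# Plain MaxEnt (exponent 1): two C1 violations cost 2 * 1.0 = 2.0,
# less than one C2 violation at 3.0, so the first candidate is preferred.
p_plain = maxent_probs(cands, weights=[1.0, 3.0], exponents=[1.0, 1.0])

# Super-additive C1 (exponent 2): two violations cost 1.0 * 2**2 = 4.0,
# now exceeding 3.0, so the preference flips -- counting cumulativity.
p_super = maxent_probs(cands, weights=[1.0, 3.0], exponents=[2.0, 1.0])
```

The same weights produce opposite winners under the two exponent settings, which is the pattern a plain MaxEnt grammar (exponents fixed at 1) cannot generate by reweighting alone.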