{"title":"Adaboost","authors":"Jan Žižka, F. Dařena, Arnošt Svoboda","doi":"10.1201/9780429469275-9","DOIUrl":null,"url":null,"abstract":"Let’s now look at the AdaBoost setup in more detail. • Loss ` = exp (exponential loss). It is also common to use the logistic loss ln(1 + exp(·)), but for simplicity we’ll use the standard choice. • Examples ((xi, yi))i=1 with xi ∈ X and yi ∈ {−1,+1}. The main thing to note is that X is just some opaque set, we are not assuming vector space structure, and can not form inner products 〈w, x〉. • Elementary hypotheses H = (hj)j=1, where hj : X → [−1,+1] for each j. Rather than interacting with examples in X directly, boosting algorithms embed them in a vector space via these functions H. For example, a vector v ∈ R is now interpreted as a linear combination of elements of H, and predictions on a new example x ∈ X are computed as x 7→ ∑","PeriodicalId":258194,"journal":{"name":"Text Mining with Machine Learning","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"29","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Text Mining with Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1201/9780429469275-9","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 29
Abstract
Let’s now look at the AdaBoost setup in more detail.

• Loss ℓ = exp (the exponential loss). It is also common to use the logistic loss ln(1 + exp(·)), but for simplicity we’ll use the standard choice.

• Examples ((x_i, y_i))_{i=1}^m with x_i ∈ X and y_i ∈ {−1, +1}. The main thing to note is that X is just some opaque set: we are not assuming any vector space structure, so we cannot form inner products ⟨w, x⟩.

• Elementary hypotheses H = (h_j)_{j=1}^n, where h_j : X → [−1, +1] for each j. Rather than interacting with examples in X directly, boosting algorithms embed them in a vector space via these functions. A vector v ∈ R^n is then interpreted as a linear combination of elements of H, and the prediction on a new example x ∈ X is computed as x ↦ ∑_{j=1}^n v_j h_j(x). A minimal code sketch of this setup is given after the list.
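To make the setup concrete, here is a minimal sketch (not the authors’ code) of AdaBoost viewed as coordinate descent on the exponential loss over the embedding given by H. The names `adaboost` and `predict`, the representation of H as a list of Python callables, and the simplifying assumption that each h_j is {−1, +1}-valued (so the step size has a closed form) are all illustrative choices.

```python
import math
import numpy as np

def adaboost(H, xs, ys, rounds=50):
    """AdaBoost as coordinate descent on the exponential loss
    L(v) = sum_i exp(-y_i * sum_j v_j h_j(x_i)).

    H  : list of callables h_j mapping an example to {-1, +1}
         (assumed binary-valued here so the line search is exact)
    xs : examples drawn from the opaque set X (no vector structure needed)
    ys : labels in {-1, +1}
    Returns v in R^n, the combination weights over H."""
    # Embed the opaque examples into R^n via H: A[i, j] = h_j(x_i).
    A = np.array([[h(x) for h in H] for x in xs], dtype=float)
    y = np.asarray(ys, dtype=float)
    m, n = A.shape
    v = np.zeros(n)
    for _ in range(rounds):
        # Example weights induced by the exponential loss at the current v.
        w = np.exp(-y * (A @ v))
        w /= w.sum()
        # Edge gamma_j = sum_i w_i * y_i * h_j(x_i); pick the best coordinate.
        gamma = A.T @ (w * y)
        j = int(np.argmax(np.abs(gamma)))
        g = np.clip(gamma[j], -1 + 1e-10, 1 - 1e-10)  # guard the log
        # Exact line search for a {-1,+1}-valued h_j:
        # alpha = (1/2) ln((1 + gamma) / (1 - gamma)); its sign follows gamma.
        v[j] += 0.5 * math.log((1 + g) / (1 - g))
    return v

def predict(H, v, x):
    """Prediction x -> sign(sum_j v_j h_j(x))."""
    return 1.0 if sum(vj * h(x) for vj, h in zip(v, H)) >= 0 else -1.0
```

For instance, with scalar examples one could take H to be threshold stumps such as `H = [lambda x, t=t: 1.0 if x > t else -1.0 for t in (0.0, 0.5, 1.0)]`. Selecting the coordinate with the largest weighted edge and taking the closed-form step reproduces the usual weak-learner choice and α_t update of AdaBoost, while never touching the examples except through the values h_j(x).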