Adaboost

Jan Žižka, F. Dařena, Arnošt Svoboda
{"title":"Adaboost","authors":"Jan Žižka, F. Dařena, Arnošt Svoboda","doi":"10.1201/9780429469275-9","DOIUrl":null,"url":null,"abstract":"Let’s now look at the AdaBoost setup in more detail. • Loss ` = exp (exponential loss). It is also common to use the logistic loss ln(1 + exp(·)), but for simplicity we’ll use the standard choice. • Examples ((xi, yi))i=1 with xi ∈ X and yi ∈ {−1,+1}. The main thing to note is that X is just some opaque set, we are not assuming vector space structure, and can not form inner products 〈w, x〉. • Elementary hypotheses H = (hj)j=1, where hj : X → [−1,+1] for each j. Rather than interacting with examples in X directly, boosting algorithms embed them in a vector space via these functions H. For example, a vector v ∈ R is now interpreted as a linear combination of elements of H, and predictions on a new example x ∈ X are computed as x 7→ ∑","PeriodicalId":258194,"journal":{"name":"Text Mining with Machine Learning","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"29","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Text Mining with Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1201/9780429469275-9","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 29

Abstract

Let's now look at the AdaBoost setup in more detail.

• Loss $\ell = \exp$ (exponential loss). It is also common to use the logistic loss $\ln(1 + \exp(\cdot))$, but for simplicity we'll use the standard choice.

• Examples $((x_i, y_i))_{i=1}^{n}$ with $x_i \in X$ and $y_i \in \{-1, +1\}$. The main thing to note is that $X$ is just some opaque set: we are not assuming any vector-space structure, and cannot form inner products $\langle w, x \rangle$.

• Elementary hypotheses $H = (h_j)_{j=1}^{m}$, where $h_j : X \to [-1, +1]$ for each $j$. Rather than interacting with examples in $X$ directly, boosting algorithms embed them in a vector space via these functions $H$. For example, a vector $v \in \mathbb{R}^{m}$ is now interpreted as a linear combination of elements of $H$, and predictions on a new example $x \in X$ are computed as $x \mapsto \sum_{j=1}^{m} v_j h_j(x)$.
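To make this setup concrete, below is a minimal AdaBoost sketch in Python, assuming the standard reweighting rule $\alpha = \frac{1}{2}\ln\frac{1-\varepsilon}{\varepsilon}$; the function names (`adaboost`, `predict`) and the toy word-presence stumps are illustrative, not taken from the chapter.

```python
# A minimal AdaBoost sketch under the setup above: examples stay opaque
# objects (here, strings), and the only access to X is through the
# hypotheses h_j, which map X into [-1, +1].
import math

def weighted_error(h, weights, examples):
    """Total weight of examples that hypothesis h gets wrong."""
    return sum(w for w, (x, y) in zip(weights, examples) if h(x) * y <= 0)

def adaboost(examples, hypotheses, rounds=10):
    """examples: list of (x, y) with y in {-1, +1}.
    hypotheses: list of callables h(x) -> [-1, +1].
    Returns v, one coefficient per hypothesis."""
    n = len(examples)
    weights = [1.0 / n] * n              # distribution over the examples
    v = [0.0] * len(hypotheses)          # coefficients over H
    for _ in range(rounds):
        # Greedily pick the hypothesis with the smallest weighted error.
        errs = [weighted_error(h, weights, examples) for h in hypotheses]
        j = min(range(len(hypotheses)), key=errs.__getitem__)
        eps = min(max(errs[j], 1e-10), 1.0 - 1e-10)   # clamp away from 0 and 1
        alpha = 0.5 * math.log((1.0 - eps) / eps)
        v[j] += alpha
        # Exponential-loss reweighting, then renormalize to a distribution.
        h = hypotheses[j]
        weights = [w * math.exp(-alpha * y * h(x))
                   for w, (x, y) in zip(weights, examples)]
        z = sum(weights)
        weights = [w / z for w in weights]
    return v

def predict(v, hypotheses, x):
    """x -> sum_j v_j * h_j(x), thresholded to a {-1, +1} label."""
    score = sum(vj * h(x) for vj, h in zip(v, hypotheses))
    return 1 if score >= 0 else -1

if __name__ == "__main__":
    # Toy data: X is raw strings; hypotheses are word-presence stumps,
    # so no vector structure on X is ever needed.
    data = [("free spam offer", +1), ("spam deal now", +1),
            ("meeting at noon", -1), ("lunch at noon", -1)]
    H = [lambda x, w=w: 1.0 if w in x else -1.0
         for w in ("spam", "free", "noon", "lunch")]
    v = adaboost(data, H, rounds=5)
    print(predict(v, H, "spam now"))     # expected: 1
```

Note that the learned coefficient vector `v` is exactly the linear combination over $H$ described in the abstract, and the final classifier is the prediction rule $x \mapsto \sum_j v_j h_j(x)$ thresholded at zero.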