
Principles of Artificial Neural Networks: Latest Publications

The Madaline
Pub Date : 2019-03-01 DOI: 10.1142/9789811201233_0005
{"title":"The Madaline","authors":"","doi":"10.1142/9789811201233_0005","DOIUrl":"https://doi.org/10.1142/9789811201233_0005","url":null,"abstract":"","PeriodicalId":188131,"journal":{"name":"Principles of Artificial Neural Networks","volume":"435 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115606019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
LAMSTAR Neural Networks
Pub Date : 2019-03-01 DOI: 10.1142/9789811201233_0015
{"title":"LAMSTAR Neural Networks","authors":"","doi":"10.1142/9789811201233_0015","DOIUrl":"https://doi.org/10.1142/9789811201233_0015","url":null,"abstract":"","PeriodicalId":188131,"journal":{"name":"Principles of Artificial Neural Networks","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128922519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Learning Neural Networks: Principles and Scope
Pub Date : 2019-03-01 DOI: 10.1142/9789811201233_0013
{"title":"Deep Learning Neural Networks: Principles and Scope","authors":"","doi":"10.1142/9789811201233_0013","DOIUrl":"https://doi.org/10.1142/9789811201233_0013","url":null,"abstract":"","PeriodicalId":188131,"journal":{"name":"Principles of Artificial Neural Networks","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127239128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Perceptron
Pub Date : 2019-03-01 DOI: 10.1142/9789811201233_0004
Volker Tresp
The perceptron implements a binary classifier $f : \mathbb{R}^D \to \{+1, -1\}$ with a linear decision surface through the origin: $f(x) = \operatorname{step}(\theta^{T} x)$, where $\operatorname{step}(z) = 1$ if $z \geq 0$ and $-1$ otherwise. Using the zero-one loss $L(y, f(x)) = 0$ if $y = f(x)$ and $1$ otherwise, the empirical risk of the perceptron on training data $S = \{(x_i, y_i)\}_{i=1}^{N}$ is $R_{\mathrm{emp}}(\theta) = \frac{1}{N} \sum_{i=1}^{N} L(y_i, f(x_i))$. The problem with this is that $R_{\mathrm{emp}}(\theta)$ is not differentiable in $\theta$, so we cannot do gradient descent to learn $\theta$. To circumvent this, we use the modified empirical loss $R_{\mathrm{emp}}(\theta) = \sum_{i \in \{1, \dots, N\} :\, y_i \neq \operatorname{step}(\theta^{T} x_i)} -y_i\, \theta^{T} x_i$. This just says that correctly classified examples do not incur any loss at all, while incorrectly classified examples contribute $-y_i\, \theta^{T} x_i$, which is a measure of confidence in the (incorrect) labeling. We can now use gradient descent to learn $\theta$. Starting from an arbitrary $\theta^{(0)}$, we update the parameter vector according to $\theta^{(t+1)} = \theta^{(t)} - \eta\, \nabla_{\theta} R_{\mathrm{emp}} \big|_{\theta^{(t)}}$, where $\eta$, called the learning rate, is a parameter of our choosing. The gradient of the modified loss is again a sum over the misclassified examples: $\nabla_{\theta} R_{\mathrm{emp}}(\theta) = \sum_{i :\, y_i \neq \operatorname{step}(\theta^{T} x_i)} -y_i\, x_i$. A slightly more principled way to look at this is to derive the modified risk from the hinge loss $L(y, \theta^{T} x) = \max(0, -y\, \theta^{T} x)$.
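To make the update rule above concrete, here is a minimal sketch in Python (NumPy) of batch gradient descent on the modified empirical loss. The function name, the synthetic data, and the hyperparameters eta and max_epochs are illustrative assumptions, not anything specified in the chapter.

```python
import numpy as np

def train_perceptron(X, y, eta=1.0, max_epochs=100):
    """Gradient descent on the modified (perceptron) empirical loss.

    X: (N, D) array of inputs; y: (N,) array of labels in {+1, -1}.
    Only misclassified examples contribute -y_i * theta^T x_i to the loss,
    so each gradient step adds eta * y_i * x_i for every misclassified x_i.
    """
    n_samples, n_features = X.shape
    theta = np.zeros(n_features)                    # arbitrary starting point theta^(0)
    for _ in range(max_epochs):
        preds = np.where(X @ theta >= 0, 1, -1)     # step(theta^T x), surface through the origin
        mis = preds != y                            # mask of misclassified examples
        if not mis.any():
            break                                   # zero empirical risk: stop early
        grad = -(y[mis, None] * X[mis]).sum(axis=0) # sum over misclassified of -y_i x_i
        theta = theta - eta * grad                  # theta^(t+1) = theta^(t) - eta * gradient
    return theta

# Toy usage on linearly separable synthetic data (hypothetical, for illustration only)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
w_true = np.array([2.0, -1.0])
y = np.where(X @ w_true >= 0, 1, -1)
theta_hat = train_perceptron(X, y)
print("learned theta:", theta_hat)
```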
{"title":"The Perceptron","authors":"Volker Tresp","doi":"10.1142/9789811201233_0004","DOIUrl":"https://doi.org/10.1142/9789811201233_0004","url":null,"abstract":"The perceptron implements a binary classifier f : R D → {+1, −1} with a linear decision surface through the origin: f (x) = step(θ x). (1) where step(z) = 1 if z ≥ 0 −1 otherwise. Using the zero-one loss L(y, f (x)) = 0 if y = f (x) 1 otherwise, the empirical risk of the perceptron on training data S = 1. The problem with this is that R emp (θ) is not differentiable in θ, so we cannot do gradient descent to learn θ. To circumvent this, we use the modified empirical loss R emp (θ) = i∈(1,2,...,N) : yi =step θ T xi −y i θ T x i. (2) This just says that correctly classified examples don't incur any loss at all, while incorrectly classified examples contribute θ T x i , which is some sort of measure of confidence in the (incorrect) labeling. 1 We can now use gradient descent to learn θ. Starting from an arbitrary θ (0) , we update our parameter vector according to θ (t+1) = θ (t) − η∇ θ R| θ (t) , where η, called the learning rate, is a parameter of our choosing. The gradient of (2) is again a sum over the misclassified examples: ∇ θ R emp (θ) = 1 A slightly more principled way to look at this is to derive this modified risk from the hinge loss L(y, θ T x) = max 0, −y θ T x .","PeriodicalId":188131,"journal":{"name":"Principles of Artificial Neural Networks","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132263914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 44
Performance of DLNN — Comparative Case Studies
Pub Date : 2019-03-01 DOI: 10.1142/9789811201233_0016
{"title":"Performance of DLNN — Comparative Case Studies","authors":"","doi":"10.1142/9789811201233_0016","DOIUrl":"https://doi.org/10.1142/9789811201233_0016","url":null,"abstract":"","PeriodicalId":188131,"journal":{"name":"Principles of Artificial Neural Networks","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134144132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Counter Propagation
Pub Date : 2019-03-01 DOI: 10.1142/9789811201233_0008
{"title":"Counter Propagation","authors":"","doi":"10.1142/9789811201233_0008","DOIUrl":"https://doi.org/10.1142/9789811201233_0008","url":null,"abstract":"","PeriodicalId":188131,"journal":{"name":"Principles of Artificial Neural Networks","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129386715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Back Propagation
Pub Date : 2019-03-01 DOI: 10.1142/9789811201233_0006
Barak Oshri, Vincent Chen, Nish Khandwala, Yi Wen, TA Yi Wen
{"title":"Back Propagation","authors":"Barak Oshri, Vincent Chen, Nish Khandwala, Yi Wen, TA Yi Wen","doi":"10.1142/9789811201233_0006","DOIUrl":"https://doi.org/10.1142/9789811201233_0006","url":null,"abstract":"","PeriodicalId":188131,"journal":{"name":"Principles of Artificial Neural Networks","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122836414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
BACK MATTER
Pub Date : 2019-03-01 DOI: 10.1142/9789811201233_bmatter
{"title":"BACK MATTER","authors":"","doi":"10.1142/9789811201233_bmatter","DOIUrl":"https://doi.org/10.1142/9789811201233_bmatter","url":null,"abstract":"","PeriodicalId":188131,"journal":{"name":"Principles of Artificial Neural Networks","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126020070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FRONT MATTER
Pub Date : 2019-03-01 DOI: 10.1142/9789811201233_fmatter
{"title":"FRONT MATTER","authors":"","doi":"10.1142/9789811201233_fmatter","DOIUrl":"https://doi.org/10.1142/9789811201233_fmatter","url":null,"abstract":"","PeriodicalId":188131,"journal":{"name":"Principles of Artificial Neural Networks","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129791133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Learning Convolutional Neural Network
Pub Date : 2019-03-01 DOI: 10.1142/9789811201233_0014
{"title":"Deep Learning Convolutional Neural Network","authors":"","doi":"10.1142/9789811201233_0014","DOIUrl":"https://doi.org/10.1142/9789811201233_0014","url":null,"abstract":"","PeriodicalId":188131,"journal":{"name":"Principles of Artificial Neural Networks","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128589982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1