Incorporating Transfer Learning in CNN Architecture

Aparna Gurjar, Preeti S. Voditel
DOI: 10.47164/ijngc.v14i1.1052
Journal: International Journal of Next-Generation Computing
Published: 2023-02-15 (Journal Article)
Citations: 0

Abstract

Machine learning (ML) is a data-intensive process. Training ML algorithms requires huge datasets, yet enough data is often unavailable for a multitude of reasons: annotated data may be lacking in a particular domain, or time constraints in the data collection process may leave too little data. Data collection is often very expensive, and in a few domains it is very difficult. In such cases, if methods can be designed to reuse the knowledge gained in one domain with ample training data in some other, related domain with less training data, then the problems associated with lack of data can be overcome. Transfer Learning (TL) is one such method. TL improves the performance of the target domain through knowledge transfer from a different but related source domain. This knowledge transfer can take the form of feature extraction, domain adaptation, rule extraction for advice, and so on. TL also works with various kinds of ML tasks related to supervised, unsupervised, and reinforcement learning. Convolutional Neural Networks are well suited to the TL approach: the general features learned on a base (source) network are shifted to the target network, which then uses its own data to learn new features specific to its requirements.
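The feature-transfer workflow the abstract describes — freeze the general features learned on a source network, then train only a small task-specific head on the target domain's limited data — can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: the "base network" weights, the toy target dataset, and all hyperparameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Base network": a frozen feature extractor whose weights stand in for
# features learned on a large source-domain dataset. They are never updated.
W_base = rng.normal(size=(4, 8))

def extract_features(x):
    """Frozen forward pass of the base network (ReLU features)."""
    return np.maximum(0.0, x @ W_base)

# Tiny target-domain dataset -- far too small to train a full network.
X = rng.normal(size=(32, 4))
y = (X[:, 0] > 0).astype(float)          # toy binary labels

# New task-specific head: the ONLY trainable parameters.
w_head = np.zeros(8)
b_head = 0.0

feats = extract_features(X)              # computed once; the base stays frozen
for _ in range(500):                     # fit the head by logistic regression
    p = 1.0 / (1.0 + np.exp(-(feats @ w_head + b_head)))
    grad = p - y
    w_head -= 0.1 * feats.T @ grad / len(y)
    b_head -= 0.1 * grad.mean()

# Training accuracy of the transferred model on the small target set.
acc = float((((feats @ w_head + b_head) > 0) == (y > 0.5)).mean())
```

In practice the same pattern is applied with pretrained CNNs (e.g. loading ImageNet weights, disabling gradients on the convolutional layers, and replacing the final classification layer), but the mechanics are identical: reuse frozen source features, train only the new head.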
Source journal: International Journal of Next-Generation Computing (Computer Science, Theory & Methods)
Self-citation rate: 66.70%
Articles published per year: 60