Transfer Learning and Deep Domain Adaptation
Pub Date: 2020-10-29 · DOI: 10.5772/intechopen.94072
Wen Xu, Jing He, Yanfeng Shu
Transfer learning is an emerging technique in machine learning by which we can solve a new task with the knowledge obtained from an old task, in order to address the lack of labeled data. In particular, deep domain adaptation (a branch of transfer learning) has received the most attention in recently published articles. The intuition behind this is that deep neural networks usually have a large capacity to learn representations from one dataset, and part of that information can be reused for a new task. In this research, we first present the complete scenarios of transfer learning according to the domains and tasks. Second, we conduct a comprehensive survey of deep domain adaptation and categorize the recent advances into three types according to the implementation approach: fine-tuning networks, adversarial domain adaptation, and sample-reconstruction approaches. Third, we discuss the details of these methods and introduce some typical real-world applications. Finally, we conclude our work and explore some potential issues to be further addressed.
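As a hedged illustration of the first category (fine-tuning networks), the sketch below freezes an ImageNet-pretrained ResNet-18 feature extractor and retrains only its classification head on a new task. The class count, optimizer, and training loop are assumptions for illustration, not the chapter's own setup.

```python
# Minimal fine-tuning sketch (assumed setup): adapt an ImageNet-pretrained
# ResNet-18 to a new target task with few labels by freezing the transferred
# feature extractor and retraining only the classification head.
import torch
import torch.nn as nn
from torchvision import models

num_target_classes = 10  # hypothetical number of classes in the new task

model = models.resnet18(pretrained=True)   # representation learned on the "old" task
for param in model.parameters():
    param.requires_grad = False            # freeze the transferred layers

# Replace the final layer so only it is learned on the target domain.
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# A training loop over the (small) labeled target dataset would go here, e.g.:
# for x, y in target_loader:
#     optimizer.zero_grad()
#     loss = criterion(model(x), y)
#     loss.backward()
#     optimizer.step()
```

Freezing all but the last layer is the simplest variant; in practice, more layers may be unfrozen as the amount of labeled target data grows.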
{"title":"Transfer Learning and Deep Domain Adaptation","authors":"Wen Xu, Jing He, Yanfeng Shu","doi":"10.5772/intechopen.94072","DOIUrl":"https://doi.org/10.5772/intechopen.94072","url":null,"abstract":"Transfer learning is an emerging technique in machine learning, by which we can solve a new task with the knowledge obtained from an old task in order to address the lack of labeled data. In particular deep domain adaptation (a branch of transfer learning) gets the most attention in recently published articles. The intuition behind this is that deep neural networks usually have a large capacity to learn representation from one dataset and part of the information can be further used for a new task. In this research, we firstly present the complete scenarios of transfer learning according to the domains and tasks. Secondly, we conduct a comprehensive survey related to deep domain adaptation and categorize the recent advances into three types based on implementing approaches: fine-tuning networks, adversarial domain adaptation, and sample-reconstruction approaches. Thirdly, we discuss the details of these methods and introduce some typical real-world applications. Finally, we conclude our work and explore some potential issues to be further addressed.","PeriodicalId":129871,"journal":{"name":"Advances and Applications in Deep Learning","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128750589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Learning Enabled Nanophotonics
Pub Date: 2020-07-30 · DOI: 10.5772/intechopen.93289
Lujun Huang, Lei Xu, A. Miroshnichenko
Deep learning has become a vital approach to solving big-data-driven problems. It has found tremendous applications in computer vision and natural language processing. More recently, deep learning has been widely used to optimise the performance of nanophotonic devices, where conventional computational approaches may require substantial computation time and computational resources. In this chapter, we briefly review the recent progress of deep learning in nanophotonics. We overview applications of the deep learning approach to optimising various nanophotonic devices, including multilayer structures, plasmonic/dielectric metasurfaces and plasmonic chiral metamaterials. Also, nanophotonics can directly serve as an ideal platform for mimicking optical neural networks based on nonlinear optical media, which in turn helps to achieve high-performance photonic chips that may not be realisable with conventional design methods.
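A minimal sketch of the surrogate-modelling idea behind such optimisation, under assumed placeholders: a small network learns the mapping from multilayer thicknesses to a sampled transmission spectrum, so that an expensive electromagnetic simulation can be replaced inside a design loop. The dimensions and the random training pairs below are illustrative only; in practice they would come from full-wave simulations.

```python
# Illustrative sketch (not from the chapter): a forward surrogate network that
# maps multilayer-structure parameters (layer thicknesses) to an optical
# response (a sampled transmission spectrum).
import torch
import torch.nn as nn

n_layers = 8          # hypothetical number of thin-film layers
n_wavelengths = 64    # hypothetical number of spectral sample points

surrogate = nn.Sequential(
    nn.Linear(n_layers, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, n_wavelengths), nn.Sigmoid(),  # transmission bounded in [0, 1]
)

# Placeholder data standing in for simulated (thicknesses, spectrum) pairs.
thicknesses = torch.rand(1024, n_layers)
spectra = torch.rand(1024, n_wavelengths)

optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(surrogate(thicknesses), spectra)
    loss.backward()
    optimizer.step()

# Once trained, the cheap surrogate can stand in for the full simulation
# inside a device-design optimisation loop.
```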
{"title":"Deep Learning Enabled Nanophotonics","authors":"Lujun Huang, Lei Xu, A. Miroshnichenko","doi":"10.5772/intechopen.93289","DOIUrl":"https://doi.org/10.5772/intechopen.93289","url":null,"abstract":"Deep learning has become a vital approach to solving a big-data-driven problem. It has found tremendous applications in computer vision and natural language processing. More recently, deep learning has been widely used in optimising the performance of nanophotonic devices, where the conventional computational approach may require much computation time and significant computation source. In this chapter, we briefly review the recent progress of deep learning in nanophotonics. We overview the applications of the deep learning approach to optimising the various nanophotonic devices. It includes multilayer structures, plasmonic/dielectric metasurfaces and plasmonic chiral metamaterials. Also, nanophotonic can directly serve as an ideal platform to mimic optical neural networks based on nonlinear optical media, which in turn help to achieve high-performance photonic chips that may not be realised based on conventional design method.","PeriodicalId":129871,"journal":{"name":"Advances and Applications in Deep Learning","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126570914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Explainable Artificial Intelligence (xAI) Approaches and Deep Meta-Learning Models
Pub Date: 2020-06-25 · DOI: 10.5772/INTECHOPEN.92172
Evren Daglarli
Explainable artificial intelligence (xAI) is one of the most interesting issues to have emerged recently. Many researchers are approaching the subject from different directions, and interesting results have come out; however, we are still at the beginning of the road to understanding these types of models. The forthcoming years are expected to be years in which the transparency of deep learning models is widely discussed. In classical artificial intelligence approaches, we frequently encounter the deep learning methods available today. These deep learning methods can yield highly effective results depending on the dataset size, dataset quality, the methods used in feature extraction, the hyperparameter set used in deep learning models, the activation functions, and the optimization algorithms. However, current deep learning models have important shortcomings. These artificial neural network-based models are black-box models that learn from, and generalize over, the data fed to them; therefore, the relational link between input and output is not observable. This is an important open problem in artificial neural networks and deep learning models. For these reasons, serious efforts are needed on the explainability and interpretability of black-box models.
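One widely used post-hoc technique for peeking inside such black boxes is gradient-based saliency. The sketch below, with a placeholder model and input, shows how the sensitivity of a prediction to each input feature can be read off the input gradient; it illustrates the general idea and is not a method proposed in the chapter.

```python
# Minimal sketch of gradient-based saliency: inspect which input features a
# "black box" network's prediction is most sensitive to. Model and input are
# placeholders for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
model.eval()

x = torch.randn(1, 20, requires_grad=True)  # hypothetical input sample
score = model(x)[0].max()                   # score of the predicted class
score.backward()                            # gradient of that score w.r.t. the input

# Large |gradient| entries mark the input features the decision depends on
# most strongly, giving a crude, local explanation of the prediction.
saliency = x.grad.abs().squeeze()
print(saliency)
```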
{"title":"Explainable Artificial Intelligence (xAI) Approaches and Deep Meta-Learning Models","authors":"Evren Daglarli","doi":"10.5772/INTECHOPEN.92172","DOIUrl":"https://doi.org/10.5772/INTECHOPEN.92172","url":null,"abstract":"The explainable artificial intelligence (xAI) is one of the interesting issues that has emerged recently. Many researchers are trying to deal with the subject with different dimensions and interesting results that have come out. However, we are still at the beginning of the way to understand these types of models. The forthcoming years are expected to be years in which the openness of deep learning models is discussed. In classical artificial intelligence approaches, we frequently encounter deep learning methods available today. These deep learning methods can yield highly effective results according to the data set size, data set quality, the methods used in feature extraction, the hyper parameter set used in deep learning models, the activation functions, and the optimization algorithms. However, there are important shortcomings that current deep learning models are currently inadequate. These artificial neural network-based models are black box models that generalize the data transmitted to it and learn from the data. Therefore, the relational link between input and output is not observable. This is an important open point in artificial neural networks and deep learning models. For these reasons, it is necessary to make serious efforts on the explainability and interpretability of black box models.","PeriodicalId":129871,"journal":{"name":"Advances and Applications in Deep Learning","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121992477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}