{"title":"基于Langevin-Neelakanta机器的新型深度学习人工神经网络","authors":"D. De Groff, P. Neelakanta","doi":"10.14738/tmlai.104.13100","DOIUrl":null,"url":null,"abstract":"In the contexts of deep learning (DL) considered in artificial intelligence (AI) efforts, relevant machine learning (ML) algorithms adopted refer to using a class of deep artificial neural network (ANN) that supports a learning process exercised with an enormous set of input data (labeled and/or unlabeled) so to predict at the output details on accurate features of labeled data present in the input data set. In the present study, a deep ANN is proposed thereof conceived with certain novel considerations: The proposed deep architecture consists of a large number of consequently placed structures of paired-layers. Each layer hosts identical number of neuronal units for computation and the neuronal units are massively interconnected across the entire network. Further, each paired-layer is independently subjected to unsupervised learning (USL). Hence, commencing from the input layer-pair, the excitatory (input) data supplied flows across the interconnected neurons of paired layers, terminating eventually at the final pair of layers, where the output is recovered. That is, the converged neuronal states at any given pair is iteratively passed on to the next pair and so on. The USL suite involves collectively gathering the details of neural information across a pair of the layers constituting the network. This summed data is then limited with a specific choice of a squashing (sigmoidal) function; and, the resulting scaled value is used to adjust the coefficients of interconnection weights seeking a convergence criterion. The associated learning rate on weight adjustment is uniquely designed to facilitate fast learning towards convergence. The unique aspects of deep learning proposed here refer to: (i) Deducing the learning coefficient with a compatible algorithm so as to realize a fast convergence; and, (ii) the adopted sigmoidal function in the USL loop conforms to the heuristics of the so-called Langevin-Neelakanta machine. The paper describes the proposed deep ANN architecture with necessary details on structural considerations, sigmoidal selection, prescribing required learning rate and operational (training and predictive phase) routines. Results are furnished to demonstrate the performance efficacy of the test ANN.","PeriodicalId":119801,"journal":{"name":"Transactions on Machine Learning and Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Novel Deep Learning ANN Supported on Langevin-Neelakanta Machine\",\"authors\":\"D. De Groff, P. Neelakanta\",\"doi\":\"10.14738/tmlai.104.13100\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the contexts of deep learning (DL) considered in artificial intelligence (AI) efforts, relevant machine learning (ML) algorithms adopted refer to using a class of deep artificial neural network (ANN) that supports a learning process exercised with an enormous set of input data (labeled and/or unlabeled) so to predict at the output details on accurate features of labeled data present in the input data set. In the present study, a deep ANN is proposed thereof conceived with certain novel considerations: The proposed deep architecture consists of a large number of consequently placed structures of paired-layers. 
Each layer hosts identical number of neuronal units for computation and the neuronal units are massively interconnected across the entire network. Further, each paired-layer is independently subjected to unsupervised learning (USL). Hence, commencing from the input layer-pair, the excitatory (input) data supplied flows across the interconnected neurons of paired layers, terminating eventually at the final pair of layers, where the output is recovered. That is, the converged neuronal states at any given pair is iteratively passed on to the next pair and so on. The USL suite involves collectively gathering the details of neural information across a pair of the layers constituting the network. This summed data is then limited with a specific choice of a squashing (sigmoidal) function; and, the resulting scaled value is used to adjust the coefficients of interconnection weights seeking a convergence criterion. The associated learning rate on weight adjustment is uniquely designed to facilitate fast learning towards convergence. The unique aspects of deep learning proposed here refer to: (i) Deducing the learning coefficient with a compatible algorithm so as to realize a fast convergence; and, (ii) the adopted sigmoidal function in the USL loop conforms to the heuristics of the so-called Langevin-Neelakanta machine. The paper describes the proposed deep ANN architecture with necessary details on structural considerations, sigmoidal selection, prescribing required learning rate and operational (training and predictive phase) routines. Results are furnished to demonstrate the performance efficacy of the test ANN.\",\"PeriodicalId\":119801,\"journal\":{\"name\":\"Transactions on Machine Learning and Artificial Intelligence\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-09-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Transactions on Machine Learning and Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.14738/tmlai.104.13100\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Transactions on Machine Learning and Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14738/tmlai.104.13100","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Novel Deep Learning ANN Supported on Langevin-Neelakanta Machine
In the contexts of deep learning (DL) pursued in artificial intelligence (AI) efforts, the machine learning (ML) algorithms of interest employ a class of deep artificial neural network (ANN) whose learning process is exercised with an enormous set of input data (labeled and/or unlabeled) so as to predict, at the output, accurate details of the features of the labeled data present in the input set. In the present study, a deep ANN conceived with certain novel considerations is proposed: the architecture consists of a large number of consecutively placed paired-layer structures. Each layer hosts an identical number of neuronal units for computation, and the units are massively interconnected across the entire network. Further, each paired layer is independently subjected to unsupervised learning (USL). Hence, commencing from the input layer-pair, the excitatory (input) data flows across the interconnected neurons of the paired layers, terminating eventually at the final pair of layers, where the output is recovered. That is, the converged neuronal states of any given pair are iteratively passed on to the next pair, and so on. The USL routine collectively gathers the neural information across a pair of layers constituting the network; this summed data is then limited by a specific choice of squashing (sigmoidal) function, and the resulting scaled value is used to adjust the interconnection-weight coefficients toward a convergence criterion. The associated learning rate for weight adjustment is uniquely designed to facilitate fast learning toward convergence. The unique aspects of the deep learning proposed here are: (i) deducing the learning coefficient with a compatible algorithm so as to realize fast convergence; and (ii) adopting a sigmoidal function in the USL loop that conforms to the heuristics of the so-called Langevin-Neelakanta machine. The paper describes the proposed deep ANN architecture with the necessary details on structural considerations, sigmoid selection, prescription of the required learning rate, and operational (training- and predictive-phase) routines. Results are furnished to demonstrate the performance efficacy of the test ANN.
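The abstract invokes the Langevin-Neelakanta machine only by its heuristics. The following is a minimal sketch, assuming the squashing nonlinearity is the classical Langevin function L(x) = coth(x) − 1/x, the sigmoid associated with the authors' earlier Langevin-machine work; unlike the logistic sigmoid, it is odd, saturates at ±1, and has slope 1/3 at the origin.

```python
import numpy as np

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, used as the squashing
    (sigmoidal) nonlinearity. It is odd, monotone, and saturates at
    +/-1; near the origin L(x) ~ x/3, which guards the 0/0 limit."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-4
    safe = np.where(small, 1.0, x)  # dummy denominator where x ~ 0
    return np.where(small, x / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

print(langevin(np.array([-5.0, 0.0, 0.3, 5.0])))
# -> approximately [-0.8001, 0.0, 0.0994, 0.8001]
```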
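Building on that sketch, the loop below caricatures the per-pair USL cycle described in the abstract: pair activity is summed, squashed by the Langevin sigmoid, and the interconnection weights are adjusted under a decaying learning rate until a convergence criterion is met, after which the converged states excite the next pair. The abstract does not disclose the actual update rule or the fast-convergence learning coefficient, so the Hebbian-style outer-product update, the schedule lr0/(1 + t), and the names train_pair and forward_stack are illustrative assumptions only.

```python
import numpy as np  # uses langevin() as defined in the previous sketch

def train_pair(x, n_units, lr0=0.5, tol=1e-4, max_iter=500, seed=0):
    # One paired layer under unsupervised learning: squash the summed
    # excitation W @ x, nudge the weights, and stop once successive
    # squashed states change by less than tol.
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_units, x.size))
    y_prev = np.zeros(n_units)
    for t in range(max_iter):
        y = langevin(W @ x)                               # collected, squashed pair activity
        W += (lr0 / (1.0 + t)) * np.outer(y - y_prev, x)  # assumed update rule
        if np.max(np.abs(y - y_prev)) < tol:              # convergence criterion
            return y
        y_prev = y
    return y

def forward_stack(x, layer_sizes):
    # Chain the pairs: each converges independently, and its converged
    # states become the excitation of the next pair, from the input
    # pair through to the output pair.
    state = np.asarray(x, dtype=float)
    for n in layer_sizes:
        state = train_pair(state, n)
    return state

# Identical unit counts in every layer, as the architecture stipulates.
output = forward_stack(np.linspace(-1.0, 1.0, 16), [16, 16, 16])
```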