
Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94): Latest Publications

An adaptive recurrent neural network system for multi-step-ahead hourly prediction of power system loads
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374781
A. Khotanzad, A. Abaye, D. Maratukulam
In this paper a new recurrent neural network (RNN) based system for hourly prediction of power system loads for up to two days ahead is developed. The system is a modular one consisting of 24 non-fully connected RNNs. Each RNN predicts the one- and two-day-ahead load values of a particular hour of the day. The RNNs are trained with a backpropagation-through-time algorithm using a teacher forcing strategy. To handle non-stationarities, an adaptive scheme is used to adjust the RNN weights during the forecasting phase. The performance of the forecaster is tested on one year of real data from two utilities and the results are excellent. This recurrent system outperforms another modular feedforward NN-based forecaster which is in beta testing at several electric utilities.
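The teacher-forcing idea in the training procedure can be shown with a minimal sketch. The NumPy toy below only contrasts how the recurrent feedback input is supplied in the two modes: during teacher-forced training the observed previous load is fed back, while during multi-step forecasting the network's own prediction is. The network sizes, the synthetic load series, and the untrained random weights are assumptions for illustration; the paper's 24-module architecture, the backpropagation-through-time gradients, and the adaptive weight update are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hourly "load" series (daily sine plus noise) standing in for utility data.
T = 200
load = 100 + 10 * np.sin(2 * np.pi * np.arange(T) / 24) + rng.normal(0, 1, T)

# A minimal recurrent cell; weights are random and untrained (BPTT is omitted).
n_in, n_hid = 2, 8                              # inputs: previous load + hour-of-day feature
W_in  = rng.normal(0, 0.1, (n_hid, n_in))
W_rec = rng.normal(0, 0.1, (n_hid, n_hid))
W_out = rng.normal(0, 0.1, (1, n_hid))

def step(h, prev_load, hour):
    x = np.array([prev_load / 100.0, np.sin(2 * np.pi * hour / 24)])
    h = np.tanh(W_in @ x + W_rec @ h)
    y = (W_out @ h)[0] * 100.0                  # next-hour load estimate
    return h, y

# Teacher forcing (training-time behaviour): the *observed* previous load is fed
# back at every step, so recurrence errors do not accumulate while learning.
h = np.zeros(n_hid)
teacher_forced = []
for t in range(1, T):
    h, y = step(h, load[t - 1], t % 24)
    teacher_forced.append(y)

# Free-running multi-step forecast (deployment): the network's own previous
# prediction is fed back instead, as in one- and two-day-ahead forecasting.
h = np.zeros(n_hid)
prev = load[-1]
forecast = []
for t in range(T, T + 48):                      # 48 hours ahead
    h, prev = step(h, prev, t % 24)
    forecast.append(prev)

print(len(teacher_forced), "teacher-forced steps,", len(forecast), "forecast hours")
```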
Citations: 23
An incremental concept formation approach to learn and discover from a clinical database
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374706
V. Soo, Jan-Sing Wang, Shih-Pu Wang
The main interest of this research is to discover clinical implications from a large PTCA (Percutaneous Transluminal Coronary Angioplasty) database. A case-based concept formation model, D-UNIMEM, modified from Lebowitz's UNIMEM, is proposed for this purpose. In this model, we integrate two kinds of class membership: a polythetic class membership and an index-conjunction class membership. The former is a polythetic clustering approach that serves the early stage of concept formation. The latter, which allows only relevant instances to be placed in the same cluster, serves the later stage of concept formation. D-UNIMEM can extract interesting correlations among features from the learned concept hierarchy.
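As a rough sketch of incremental concept formation in the UNIMEM family, the toy below places each new case under its most similar existing concept (or starts a new one) and updates feature-value counts. The similarity measure, the threshold, and the example features are hypothetical; D-UNIMEM's polythetic and index-conjunction membership criteria are not reproduced.

```python
from collections import Counter

def similarity(case, concept):
    """Fraction of the case's feature-value pairs that are typical of the concept."""
    hits = sum(1 for f, v in case.items()
               if concept["counts"][(f, v)] / concept["n"] >= 0.5)
    return hits / len(case)

def incorporate(case, concepts, threshold=0.5):
    """Place a new case under the most similar concept, or start a new one."""
    best, best_sim = None, -1.0
    for c in concepts:
        s = similarity(case, c)
        if s > best_sim:
            best, best_sim = c, s
    if best is None or best_sim < threshold:
        best = {"n": 0, "counts": Counter()}
        concepts.append(best)
    best["n"] += 1
    for f, v in case.items():
        best["counts"][(f, v)] += 1

# Hypothetical PTCA-style cases described by symbolic features.
cases = [
    {"vessel": "LAD", "stenosis": "severe", "outcome": "restenosis"},
    {"vessel": "LAD", "stenosis": "severe", "outcome": "restenosis"},
    {"vessel": "RCA", "stenosis": "mild", "outcome": "patent"},
]
concepts = []
for case in cases:
    incorporate(case, concepts)
print(len(concepts), "concepts formed")
```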
Citations: 0
A modular artificial neural network system for the classification and selection of coatings for a chemical sensor array
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374772
G. Chu, ChengXin Cui, D. Stacey
An application in the area of chemical and biosensor design has provided the inspiration for research into some of the issues involved in the design and application of modular artificial neural networks (ANNs) for pattern classification tasks. We can divide the development of modular ANNs into two main components: (1) the topological design of the individual modular ANNs and the construction of the assembly of modules; and (2) the analysis of the data sets to be used to train the individual modules. The chemical sensor design task allows us to explore this second component to identify some of the implications for the capture and analysis of data appropriate for the training of modular ANN systems.
Citations: 1
Aspects of information detection using entropy
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374746
J. Mrsic-Flogel
An evolving learning system should be able to self-organise on its input vector continuously through time. This paper presents initial simulation results which show that entropy is a measure that could be employed to find various coding-structure information by inspecting a binary input channel through time. It also shows that source information needs to be sparsely coded for entropy to be able to detect which code bitstring lengths are being employed to communicate source information to a self-organizing system.
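A small experiment in the spirit of the abstract: build a binary stream from a sparse codebook of fixed-length words and compare the block entropy at several candidate block lengths. The codebook, the lengths, and the block-alignment assumption are illustrative choices, not the paper's simulations.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

def block_entropy(bits, k):
    """Shannon entropy (in bits) of non-overlapping length-k blocks."""
    blocks = [tuple(bits[i:i + k]) for i in range(0, len(bits) - k + 1, k)]
    counts = Counter(blocks)
    n = sum(counts.values())
    p = np.array([c / n for c in counts.values()])
    return float(-(p * np.log2(p)).sum())

# A sparsely coded source: a stream built from a 4-word codebook of length-8 words.
codebook = [tuple(rng.integers(0, 2, 8)) for _ in range(4)]
stream = [b for _ in range(500) for b in codebook[rng.integers(0, 4)]]

for k in (4, 6, 8, 10, 12):
    h = block_entropy(stream, k)
    print(f"block length {k:2d}: entropy {h:.2f} bits ({h / k:.3f} bits per bit)")
# The per-bit entropy is typically lowest at the true code length (k = 8 here),
# which is the kind of coding structure the paper proposes to detect.
```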
Citations: 0
Robot tracking in task space using neural networks
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374684
G. Feng, C. K. Chak
This paper considers tracking control of robots in task space. A new control scheme is proposed, based on a conventional controller and a neural network based compensating controller. The scheme takes advantage of the simplicity of the model-based control approach and uses the neural network controller to compensate for the robot modelling uncertainties. The neural network is trained online based on Lyapunov theory, and thus its convergence is guaranteed.
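A minimal sketch of the general idea, assuming a scalar plant and a textbook Lyapunov-motivated adaptation law (w_hat' = -gamma * e * phi(x)): the control signal combines a simple model-based term with a Gaussian-RBF compensator whose weights are updated online from the tracking error. This is an illustration under those assumptions, not the paper's controller or its convergence proof.

```python
import numpy as np

# Scalar plant xdot = f(x) + u; f plays the role of the modelling uncertainty.
f_true = lambda x: 2.0 * np.sin(x) + 0.5 * x

# Gaussian RBF features for the compensator (hypothetical sizes and placement).
centers = np.linspace(-2.0, 2.0, 15)
phi = lambda x: np.exp(-(x - centers) ** 2 / 0.5)

dt, k_p, gamma = 0.001, 5.0, 50.0
x, w_hat = 0.0, np.zeros_like(centers)

abs_err = []
for i in range(20000):                           # 20 seconds of simulated time
    t = i * dt
    x_d, xdot_d = np.sin(t), np.cos(t)           # desired trajectory and its derivative
    e = x_d - x                                  # tracking error
    u = xdot_d + k_p * e - w_hat @ phi(x)        # model-based term + NN compensation
    w_hat += dt * (-gamma * e * phi(x))          # adaptation law: w_hat' = -gamma*e*phi(x)
    x += dt * (f_true(x) + u)                    # Euler step of the true plant
    abs_err.append(abs(e))

print(f"mean |e| over the last second: {np.mean(abs_err[-1000:]):.4f}")
```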
Citations: 18
The best approximation to C^2 functions and its error bounds using regular-center Gaussian networks
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374595
Binfan Liu, J. Si
Gaussian neural networks are considered for approximating any C^2 function with support on the unit hypercube I_m = [0,1]^m, in the sense of best approximation. An upper bound, O(N^-2), on the approximation error is obtained in the present paper for a Gaussian network having N^m hidden neurons with centers defined on a regular mesh in I_m.
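For intuition, the sketch below places N Gaussian centers on a regular mesh of [0,1] (the one-dimensional case m = 1), fits the weights by least squares on a fine sample of a C^2 target, and reports the worst-case error as N grows. The least-squares fit and the width sigma = 1/N are illustrative assumptions; the paper's constructive approximation is not reproduced, so the observed rate may differ from the O(N^-2) bound.

```python
import numpy as np

# A smooth (C^2) target on [0, 1]; m = 1 keeps the sketch short.
f = lambda x: np.sin(2 * np.pi * x) + x ** 2

x_fit  = np.linspace(0, 1, 2000)
x_test = np.linspace(0, 1, 4001)

for N in (5, 10, 20, 40):
    centers = (np.arange(N) + 0.5) / N                       # regular mesh of N centers
    sigma = 1.0 / N                                          # width tied to the mesh spacing
    design = lambda x: np.exp(-((x[:, None] - centers) ** 2) / (2 * sigma ** 2))
    w, *_ = np.linalg.lstsq(design(x_fit), f(x_fit), rcond=None)
    err = np.max(np.abs(design(x_test) @ w - f(x_test)))
    print(f"N = {N:3d}   max |error| = {err:.2e}")
```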
Citations: 6
Information capacity and fault tolerance of binary weights Hopfield nets
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374327
A. Jagota, A. Negatu, D. Kaznachey
We define a measure for the fault tolerance of binary-weights Hopfield networks and relate it to a measure of information capacity. Using these measures, we compute results on the fault tolerance and information capacity of certain Hopfield networks employing binary-valued weights. These Hopfield networks are governed by a single scalar parameter that controls their weights and biases. At one extreme value of this parameter, we show that the information capacity is optimal whereas the fault tolerance is zero. At the other extreme, our results are inexact. We are only able to show that the information capacity is at least of the order of N log_2 N and N respectively, where N is the number of units. Our fault-tolerance results are even poorer, though nonzero. Nevertheless they do indicate a trade-off between information capacity and fault tolerance as this parameter is varied from the first extreme to the second. We are also able to show that particular collections of patterns remain stable states as this parameter is varied, and their fault tolerance goes from zero at one extreme of this parameter to Theta(N^2) at the other extreme.
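A generic illustration of a Hopfield net with binary-valued weights (not the paper's one-parameter family): store a few random +/-1 patterns by the Hebb rule, clip the weights to {-1, +1}, and check how many stored patterns remain fixed points.

```python
import numpy as np

rng = np.random.default_rng(3)

N, P = 100, 5                                    # units and stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weights clipped to binary values {-1, +1}, with zero self-connections.
W = patterns.T @ patterns
W_bin = np.sign(W)
W_bin[W_bin == 0] = 1        # tie-break (cannot occur off-diagonal with an odd P; kept for safety)
np.fill_diagonal(W_bin, 0)

def is_fixed_point(W, s):
    """A stored pattern is stable if every unit agrees with the sign of its net input."""
    return bool(np.all(np.sign(W @ s) == s))

stable = sum(is_fixed_point(W_bin, p) for p in patterns)
print(f"{stable}/{P} stored patterns are fixed points of the clipped-weight net")
```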
Citations: 1
Phoneme recognition using a time-sliced recurrent recognizer
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374984
I. Kirschning, H. Tomabechi
This paper presents a new method for phoneme recognition using neural networks, the time-sliced recurrent recognizer (TSRR). In this method we employ Elman's recurrent network with error backpropagation, adding an extra group of units that are trained to give a specific representation of each phoneme while it is being recognized. The purpose of this architecture is to obtain an immediate hypothesis about the speech input without having to pre-label each phoneme or separate the phonemes before input. The input signal is divided into time slices which are recognized in a linear, sequential fashion. The generated hypothesis is shown in the extra group of units at the same moment the time slices pass through the network and are recognized as a certain phoneme. Thus the TSRR is capable of recognizing the phonemes in real time without discriminatory learning.
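A minimal sketch of the data flow described above, assuming random untrained weights and placeholder sizes: an Elman-style recurrence over time slices, with an ordinary output group plus an extra group that emits a per-slice phoneme hypothesis. Training with error backpropagation is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

n_in, n_hid, n_phonemes = 12, 20, 5              # placeholder sizes, not the paper's

W_xh = rng.normal(0, 0.1, (n_hid, n_in))         # input -> hidden
W_ch = rng.normal(0, 0.1, (n_hid, n_hid))        # context (previous hidden state) -> hidden
W_hy = rng.normal(0, 0.1, (n_in, n_hid))         # hidden -> ordinary output group
W_hp = rng.normal(0, 0.1, (n_phonemes, n_hid))   # hidden -> extra "phoneme hypothesis" group

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# A fake utterance: a sequence of fixed-length time slices (e.g. spectral frames).
slices = rng.normal(0, 1, (30, n_in))

context = np.zeros(n_hid)
for t, x in enumerate(slices):
    hidden = np.tanh(W_xh @ x + W_ch @ context)  # Elman recurrence via the context copy
    output = W_hy @ hidden                       # ordinary output units (illustrative)
    hypothesis = softmax(W_hp @ hidden)          # per-slice phoneme hypothesis
    context = hidden                             # context units copy the hidden layer
    if t % 10 == 0:
        print(f"slice {t:2d}: most likely phoneme index = {int(hypothesis.argmax())}")
```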
Citations: 6
Solving vehicle routing problems using elastic nets
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.375004
Andrew Vakhutinsky, B. Golden
Using neural networks to find an approximate solution to difficult optimization problems is a very attractive prospect. The traveling salesman problem (TSP), probably the best-known problem in combinatorial optimization, has been attacked by a variety of neural network approaches. The main purpose of this paper is to show how elastic network ideas can be applied to two TSP generalizations: the multiple traveling salesmen problem (MTSP) and the vehicle routing problem (VRP).
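For reference, the classic Durbin-Willshaw elastic net for the plain TSP is sketched below (the paper's MTSP and VRP extensions are not reproduced): a ring of points is pulled toward the cities by normalized Gaussian weights and smoothed by an elastic term while the scale K is annealed. The parameter values are conventional choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(5)

cities = rng.random((20, 2))                     # random city locations in the unit square
M = 50                                           # number of elastic-net (ring) points
theta = np.linspace(0, 2 * np.pi, M, endpoint=False)
ring = 0.5 + 0.1 * np.column_stack([np.cos(theta), np.sin(theta)])

alpha, beta, K, decay = 0.2, 2.0, 0.2, 0.99      # conventional elastic-net settings

for it in range(3000):
    # Normalized Gaussian weights: how strongly each city pulls each ring point.
    d2 = ((cities[:, None, :] - ring[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * K ** 2))
    w /= w.sum(axis=1, keepdims=True)
    # City-attraction term plus elastic smoothing between neighbouring ring points.
    pull = (w[:, :, None] * (cities[:, None, :] - ring[None, :, :])).sum(axis=0)
    elastic = np.roll(ring, 1, axis=0) - 2 * ring + np.roll(ring, -1, axis=0)
    ring += alpha * pull + beta * K * elastic
    if it % 25 == 0:
        K *= decay                               # slowly anneal the scale parameter

# Read off a tour: order the cities by their nearest ring point.
nearest = [int(np.argmin(((ring - c) ** 2).sum(axis=1))) for c in cities]
print("tour (city indices):", np.argsort(nearest).tolist())
```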
Citations: 56
Generalized autoregressive prediction with application to speech coding
Pub Date: 1994-06-27 | DOI: 10.1109/ICNN.1994.374287
Zhicheng Wang
Linear prediction is a major technique of signal processing and has been applied to many areas. Although nonlinear prediction has been investigated with techniques such as multilayer backpropagation neural networks, the computational and storage expenses are usually very high. Moreover, such models are deficient in nonlinear analysis, leaving no way to improve them except by choosing parameters and sizes experimentally in an ad hoc fashion. In this paper, the author presents new architectures for autoregressive prediction based upon a statistical analysis of nonlinearity, and a design algorithm based on a steepest-descent scheme and correlation maximization. Instead of a fixed configuration, a prediction model begins with a linear model, then learns and grows step by step into a more sophisticated structure, creating a minimal structure for a given objective. It adaptively learns much faster than existing algorithms. The model determines its own size and topology and retains a minimal structure. The proposed scheme is called generalized autoregressive prediction. This technique can also be applied to general ARMA nonlinear prediction. A new speech coding system using the generalized AR prediction is presented, which takes advantage of the nonlinearity and parallelism of the proposed AR model. The system outperforms the corresponding linear coders.
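The linear baseline that the generalized scheme starts from can be sketched briefly: fit AR coefficients by least squares on past samples and measure the prediction gain. The toy signal and the order are assumptions; the paper's growing nonlinear architecture and its design algorithm are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)

# A toy "speech-like" signal: two sinusoids plus a little noise.
n = 2000
t = np.arange(n)
x = np.sin(0.07 * t) + 0.5 * np.sin(0.21 * t) + 0.05 * rng.normal(size=n)

p = 8                                              # AR order
# Regression matrix: the p most recent past samples for each target sample.
X = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
y = x[p:]
a, *_ = np.linalg.lstsq(X, y, rcond=None)          # least-squares AR coefficients

pred = X @ a
gain = 10 * np.log10(np.var(y) / np.var(y - pred)) # prediction gain in dB
print(f"AR({p}) prediction gain: {gain:.1f} dB")
```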
Citations: 0