
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks: Latest Publications

Behaviors of transform domain backpropagation (BP) algorithm
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170426
Xiahua Yang, P. Xue
Several discrete orthogonal transforms have been used to study the behaviors of transform-domain backpropagation (BP) algorithms. Two computer-simulation examples show that, when appropriate parameters and a suitable network structure are selected, the transform-domain BP algorithm performs somewhat better than the original time-domain BP algorithm, regardless of which discrete orthogonal transform is applied. Among the transforms tested, the discrete cosine transform (DCT) and an alternative version of it are believed to behave best.
Citations: 3
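The transform-domain idea above can be sketched as a preprocessing step: build an orthonormal DCT-II matrix and map each input pattern into the transform domain before running ordinary BP. The `dct_matrix` helper and toy dimensions below are my own illustration, not the authors' code; the key property is that an orthogonal transform preserves norms, so the error-surface geometry is only rotated, not distorted.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II matrix: each row is a cosine basis vector."""
    k = np.arange(n).reshape(-1, 1)   # frequency index
    i = np.arange(n).reshape(1, -1)   # sample index
    C = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    C[0, :] *= 1.0 / np.sqrt(2.0)     # DC row rescaled for orthonormality
    return C * np.sqrt(2.0 / n)

# Transform-domain training: replace each pattern x by C @ x, then
# train the network with the usual BP updates on the transformed data.
n = 8
C = dct_matrix(n)
x = np.random.default_rng(0).standard_normal(n)
y = C @ x

# Orthogonality: C.T @ C == I, so vector norms are preserved.
assert np.allclose(C.T @ C, np.eye(n))
assert np.isclose(np.linalg.norm(y), np.linalg.norm(x))
```

Because the transform is invertible, nothing is lost; the potential gain reported in the abstract comes from the transform decorrelating the inputs.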
Pattern extraction and recognition for noisy images using the three-layered BP model
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170414
K. Imai, K. Gouhara, Y. Uchikawa
The authors present a novel pattern recognition architecture using three-layered backpropagation (BP) models. The architecture consists of two completely separate functions: extraction of a target pattern and recognition of the extracted pattern, so it can detect both where the target pattern is and what it is. To realize these functions, the following networks are introduced: a filtering network, a position network, a size network, a frame-working network, and categorizing networks. Results of handwritten-letter recognition experiments show that the architecture can recognize a deformed target pattern in an original image with heavy noise, especially lumped noise.
Citations: 7
A parallel Kalman algorithm for fast learning of multilayer neural networks
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170644
C.-M. Cho, H.-S. Don
A fast learning algorithm is proposed for training multilayer feedforward neural networks, based on a combination of optimal linear Kalman filtering theory and error propagation. In this algorithm, all the information available from the start of the training process to the current training sample is exploited in the update procedure for the weight vector of each neuron in the network, in an efficient parallel recursive method. This innovation is a massively parallel implementation and has better convergence properties than the conventional backpropagation learning technique. Its performance is illustrated on examples such as an XOR logical operation and a nonlinear mapping of two continuous signals.
Citations: 6
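The per-neuron update described above can be sketched for a single linear neuron as a standard recursive-least-squares (Kalman-style) filter; this is a minimal stand-in, not the paper's multilayer formulation, and the "teacher" weights are hypothetical. Each sample refines the estimate through the covariance matrix P, which summarizes all past information, unlike a plain gradient step.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
w_true = np.array([0.5, -1.0, 2.0])   # hypothetical teacher weights

w = np.zeros(d)          # weight estimate for one linear neuron
P = np.eye(d) * 1e6      # large initial uncertainty

for _ in range(200):
    x = rng.standard_normal(d)
    t = w_true @ x                    # noiseless target
    k = P @ x / (1.0 + x @ P @ x)     # Kalman gain
    w = w + k * (t - w @ x)           # innovation (error) update
    P = P - np.outer(k, x @ P)        # covariance update

# All past samples are absorbed, so the estimate is essentially exact.
assert np.allclose(w, w_true, atol=1e-4)
```

In the paper's scheme, the error propagated back to a hidden neuron plays the role of the innovation `t - w @ x` here, and each neuron runs such a recursion in parallel.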
Dynamic competitive learning for centroid estimation
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170507
S. Kia, G. Coghill
Presents an analog version of an artificial neural network, termed a differentiator, based on a variation of the competitive learning method. The network is trained in an unsupervised fashion, and it can be used for estimating the centroids of clusters of patterns. A dynamic competition is held among the competing neurons in adaptation to the input patterns, with the aid of a novel type of neuron called a control neuron. The output of the control neurons provides feedback reinforcement signals to modify the weight vectors during training. The training algorithm differs from conventional competitive learning methods in that all the weight vectors are modified at each step of training. Computer simulation results demonstrate the behavior of the differentiator in estimating the class centroids. The results indicate the high power of dynamic competitive learning as well as fast convergence of the weight vectors.
Citations: 2
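One simple way to realize "all weight vectors modified at each step" is leaky competitive learning, sketched below. This is my own simplification: the paper's control neurons and reinforcement signals are not modeled; the winner takes a large step toward the input while every loser takes a small one, so no unit stays dead and every weight vector moves on every sample.

```python
import numpy as np

rng = np.random.default_rng(2)
# Two synthetic clusters; the two competing neurons should settle
# on the cluster centroids.
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
data = np.concatenate([c + 0.1 * rng.standard_normal((200, 2)) for c in centers])
rng.shuffle(data)

w = data.mean(axis=0) + 0.01 * rng.standard_normal((2, 2))  # two neurons
eta_win, eta_lose = 0.05, 0.001

for x in np.tile(data, (5, 1)):       # five passes over the data
    d2 = ((w - x) ** 2).sum(axis=1)
    rates = np.full(2, eta_lose)
    rates[np.argmin(d2)] = eta_win    # winner of the competition
    w += rates[:, None] * (x - w)     # every weight vector is updated

# Each true centroid now has a weight vector nearby.
for c in centers:
    assert np.linalg.norm(w - c, axis=1).min() < 0.3
```

The leak rate trades a small bias toward the data mean for robustness against units that never win, which is the practical point of moving every weight vector at each step.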
Speaker-independent syllable recognition by a pyramidical neural net
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170712
Shulin Yang, Youan Ke, Zhong Wang
The application of the pyramidical multilayered neural net to speaker-independent recognition of isolated Chinese syllables was investigated. The feature extraction algorithm is described. Experiments involving 90 speakers from 25 provinces of China show that accuracies of 82.7% and 87.1% can be achieved, respectively, for ten isolated digits and seven typical syllables, and an over-75% cross-sex recognition rate can be obtained. The results indicate that this neural net technique can be applied to speaker-independent syllable recognition and that its performance is comparable to that of the hidden Markov model method.
Citations: 1
An enhancement to MLP model to enforce closed decision regions
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170486
R. Gemello, F. Mana
Describes a modification of the basic MLP (multilayer perceptron) model implemented to improve its capability to enforce closed decision regions. The authors' proposal is to use hyperspheres instead of hyperplanes in the first hidden layer, and to combine them through the subsequent layers. After training, the decision regions are naturally closed because they are built on simple computational elements that fire only if the pattern falls in the hypersphere receptive fields. Training is achieved by a modification of the basic backpropagation error, without ad-hoc algorithms. A two-dimensional example is reported.
Citations: 3
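A hypersphere unit can be sketched as a differentiable thresholded-distance activation; the `sphere_unit` helper below is a hypothetical stand-in for the authors' element, not their exact formulation. Its activation decays to zero in every direction away from the centre, so the region where it fires is bounded, which is exactly why decision regions built from such units come out closed, unlike the half-spaces of hyperplane units.

```python
import numpy as np

def sphere_unit(x, center, radius, beta=4.0):
    """Differentiable hypersphere unit: ~1 inside the sphere, ~0 outside.

    beta controls how sharply the activation falls off at the boundary,
    keeping the unit trainable by gradient-based (BP-style) updates.
    """
    return 1.0 / (1.0 + np.exp(beta * (np.linalg.norm(x - center) - radius)))

center = np.array([1.0, 1.0])
inside = sphere_unit(np.array([1.2, 0.9]), center, radius=1.0)
far = sphere_unit(np.array([10.0, -10.0]), center, radius=1.0)

# A point inside the sphere fires strongly; a distant point barely at all,
# no matter which direction it lies in.
assert inside > 0.9
assert far < 1e-3
```

A hyperplane unit, by contrast, stays saturated at 1 arbitrarily far away on its positive side, which is what leaves MLP decision regions open.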
Line-end detection and boundary gap completion in an EDANN module for orientation
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170597
M. V. Van Hulle, T. Tollenaere, G. Orban
Explores two sources of inaccuracy that arise when local line detectors are used to infer curve and boundary traces: (1) owing to the position uncertainty of the local line detectors, ends of thin lines are not easily detected, even when cross-orientation inhibition is applied; and (2) owing to the limited ability of the local line detectors to assess more global trace information, gaps appear in the extracted curves and boundaries. It is shown how a single EDANN (entropy-driven artificial neural network) module processing the orientation of illumination contrast compensates for these inaccuracies by performing a two-stage detection process, one competitive and one cooperative. In the competitive stage, a vector field of tangents to curves and boundaries is extracted using elongated receptive fields. In the cooperative stage, line-ends are extracted and boundary gaps are bridged by broadening the neurons' orientation tuning curves.
Citations: 2
PPNN: a faster learning and better generalizing neural net
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170513
B. Xu, L. Zheng
It is pointed out that the planar topology of the current backpropagation neural network (BPNN) limits solutions to the slow convergence rate problem, local minima, and other problems associated with BPNN. The parallel probabilistic neural network (PPNN), which uses a novel neural network topology called stereotopology, is proposed to overcome these problems. The learning and generalization abilities of BPNN and PPNN are compared on several problems. Simulation results show that PPNN learned various kinds of problems much faster than BPNN and also generalized better. It is shown that PPNN's faster, universal learnability is due to the parallel character of its stereotopology, and its better generalization to the probabilistic character of its memory-retrieval rule.
Citations: 4
Implementation of visual reconstruction networks: alternatives to resistive networks
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170649
D. Mansor, D. Suter
The resistive-grid approach has been adopted in the Harris coupled depth-slope analog network and generalized to regularization involving arbitrary degrees of smoothness. The authors consider implementations of arbitrary-order regularization networks that do not require resistive grids. The approach is to generalize the original formulation of J.G. Harris (1987) and then follow the alternative paths to analog circuit realization that the generalization allows.
Citations: 2
Dynamic channel assignment for cellular mobile radio system using feedforward neural networks
Pub Date : 1991-11-18 DOI: 10.1109/IJCNN.1991.170567
P.T.H. Chan, M. Palaniswami, D. Everitt
Conventional dynamic channel assignment schemes are both time-consuming and algorithmically complex. An alternative approach using a multilayered feedforward neural network model is examined. The results of the neural network approach are compared with those of a maximum-packing strategy. The comparison shows that the neural network approach is well suited to the dynamic channel allocation problem.
Citations: 9