
Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94): Latest Publications

Neural network hardware performance criteria
Pub Date: 2002-08-06 | DOI: 10.1109/ICNN.1994.374460
E. V. Keulen, S. Colak, H. Withagen, Hans Hegt
Nowadays, many real-world problems need fast-processing neural networks to come up with a solution in real time, so hardware implementation becomes indispensable. The problem is then to choose the right chip for a particular application. For this, a proper set of hardware performance criteria is needed to compare the performance of neural network chips. The most important criterion relates to the speed at which a network processes information with a given accuracy, and for this a new criterion is proposed: the 'effective number of connection bits' represents the effective accuracy of a chip, and the '(effective) connection primitives per second' criterion provides a new speed measure normalized to the amount of information processed in a connection. In addition, we propose another new criterion, the 'reconfigurability number', as a measure of the reconfigurability and size of a chip. These criteria give a much more neutral view of the performance of a neural network chip than existing conventional criteria such as 'connections per second'.
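For illustration, here is a minimal sketch of how such a normalized speed figure could be computed, assuming that connection primitives per second are obtained by weighting raw connections per second by the effective bit widths of the weight and input operands (the exact normalization is defined in the paper; the function and parameter names here are hypothetical):

```python
def connection_primitives_per_second(cps, eff_weight_bits, eff_input_bits):
    """Scale raw connections per second by the information per connection.

    cps: measured connections per second of the chip
    eff_weight_bits, eff_input_bits: effective (not nominal) bit widths
    """
    return cps * eff_weight_bits * eff_input_bits

# Example: a chip rated at 1e9 CPS with 6 effective weight bits and
# 4 effective input bits.
print(connection_primitives_per_second(1e9, 6, 4))  # 24000000000.0, i.e. 2.4e10
```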
Citations: 19

A neural network model of the binocular fusion in the human vision
Pub Date: 2002-08-06 | DOI: 10.1109/ICNN.1994.374883
Jing-long Wu, Y. Nishikawa
This paper proposes a model of binocular fusion based on psychological experimental results and physiological knowledge. Considering the psychological results and the physiological structure, the authors assume that binocular information is processed by several binocular channels with different spatial characteristics, ranging from low to high spatial frequency. In order to examine the mechanism of binocular fusion, the authors construct a five-layer neural network model and train it by the backpropagation learning algorithm using psychological experimental data. After learning is complete, the generalization capability of the network is examined. Further, the response functions of the hidden units are examined, which suggests that the hidden units have a spatially selective characteristic.
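As a rough illustration of such a model, below is a minimal five-layer feedforward network trained by backpropagation (a PyTorch sketch; the layer sizes, the two-value input standing in for left- and right-eye stimuli, and the synthetic targets are assumptions, since the authors trained on psychological experimental data):

```python
import torch
from torch import nn

# Five layers of units: input, three sigmoid hidden layers, output.
model = nn.Sequential(
    nn.Linear(2, 16), nn.Sigmoid(),
    nn.Linear(16, 8), nn.Sigmoid(),
    nn.Linear(8, 8),  nn.Sigmoid(),
    nn.Linear(8, 1),  nn.Sigmoid(),
)
opt = torch.optim.SGD(model.parameters(), lr=0.5)
loss_fn = nn.MSELoss()

x = torch.rand(256, 2)                           # left/right eye inputs
y = (x.mean(dim=1, keepdim=True) > 0.5).float()  # stand-in fusion target

for _ in range(500):                             # backpropagation training
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```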
Citations: 1

Accelerating the training of feedforward neural networks using generalized Hebbian rules for initializing the internal representations
Pub Date: 1996-03-01 | DOI: 10.1109/ICNN.1994.374134
N. Karayiannis
It is argued in this paper that most of the problems associated with applying existing learning algorithms to complex training tasks can be overcome by using only the input data to determine the role of the hidden units, which form a data compression or a data expansion layer. The initial set of internal representations can be formed through an unsupervised learning process applied before the supervised training algorithm. The synaptic weights that connect the input of the network with the hidden units can be determined through various linear or nonlinear variations of a generalized Hebbian learning rule, known as Oja's rule. Several experiments indicated that the proposed initialization of the internal representations significantly improves the convergence of various gradient-descent-based algorithms used to perform nontrivial training tasks.
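A minimal sketch of such an unsupervised initialization, using Oja's subspace rule (one linear variant of the generalized Hebbian family mentioned above) to set the input-to-hidden weights from the input data alone; the learning rate and epoch count are illustrative assumptions:

```python
import numpy as np

def oja_init(X, n_hidden, lr=0.01, epochs=50, seed=0):
    """Initialize input-to-hidden weights from the input data via Oja's rule."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            y = W @ x                                        # hidden activities
            W += lr * (np.outer(y, x) - np.outer(y, y) @ W)  # Oja's subspace rule
    return W  # use as the initial weights before supervised training
```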
Citations: 25

Improving generalization performance by information minimization
Pub Date: 1995-02-25 | DOI: 10.1109/ICNN.1994.374153
R. Kamimura, T. Takagi, S. Nakanishi
In this paper, we attempt to show that the information stored in networks must be as small as possible to improve generalization performance, under the condition that the networks can still produce targets with appropriate accuracy. The information is defined as the difference between maximum entropy, or uncertainty, and observed entropy. Borrowing a definition of fuzzy entropy, the uncertainty function for the internal representation is defined as -v_i log v_i - (1 - v_i) log(1 - v_i), where v_i is a hidden-unit activity. After formulating an update rule for the minimization of this information, we applied the method to a problem of language acquisition: the inference of the past-tense forms of regular verbs. Experimental results confirmed that with our method the information was significantly decreased and the generalization performance was greatly improved.
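A minimal sketch of the information measure just defined: per hidden unit, information is the maximum entropy, log 2, minus the observed fuzzy entropy of the activity v_i (using this sum directly as a loss penalty is an assumption about how the update rule would be applied):

```python
import numpy as np

def hidden_information(v, eps=1e-12):
    """Information stored in hidden units with activities v in (0, 1)."""
    v = np.clip(v, eps, 1 - eps)
    entropy = -v * np.log(v) - (1 - v) * np.log(1 - v)  # fuzzy entropy per unit
    return float(np.sum(np.log(2.0) - entropy))         # max entropy minus observed
```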
Citations: 42

Improvement of speed control performance using PID type neurocontroller in an electric vehicle system
Pub Date: 1994-12-31 | DOI: 10.1109/ICNN.1994.374640
S. Matsumura, S. Omatu, H. Higasa
In order to develop an efficient driving system for electric vehicles (EVs), a testing system using motors has been built to simulate the driving performance of EVs. In the testing system, a PID controller is used to control the rotating speed of the motor while the EV drives. In this paper, in order to improve speed control performance, a neural network is applied to tune the parameters of the PI controller. It is shown through experiments that the neural network can effectively reduce the output error while the PI controller parameters are being tuned online.
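A minimal sketch of the control structure, with a crude online update standing in for the paper's neural tuner (the gain update rules, learning rate, and initial gains are illustrative assumptions, not the authors' network):

```python
class NeuroTunedPI:
    """PI speed controller whose gains are adjusted online."""

    def __init__(self, kp=1.0, ki=0.1, lr=1e-4):
        self.kp, self.ki, self.lr = kp, ki, lr
        self.integral = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        u = self.kp * error + self.ki * self.integral  # PI control law
        # Stand-in tuner: nudge gains to shrink persistent speed error,
        # assuming motor speed increases with the control input.
        self.kp += self.lr * error * error
        self.ki += self.lr * error * self.integral
        return u
```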
Citations: 9

GMDP: a novel unified neuron model for multilayer feedforward neural networks
Pub Date: 1994-12-01 | DOI: 10.1109/ICNN.1994.374147
Sheng-Tun Li, Yiwei Chen, E. Leiss
A variety of neural models, especially higher-order networks, are known to be computationally powerful for complex applications. While they have advantages over traditional multilayer perceptrons, the nonuniformity of their network structures and learning algorithms creates practical problems, so there is a need for a framework that unifies these various models. This paper presents a novel neuron model, called the generalized multi-dendrite product (GMDP) unit. Multilayer feedforward neural networks with GMDP units are shown to be capable of realizing higher-order neural networks. The standard backpropagation learning rule is extended to this neural network. Simulation results show that single-layer GMDP networks provide an efficient model for solving general problems of function approximation and pattern classification.
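One plausible reading of such a unit is sketched below: each dendrite forms a weighted sum of the inputs, the unit multiplies the dendrite outputs (so d dendrites yield polynomial terms of degree d, i.e. higher-order behavior), and a sigmoid squashes the product. The exact GMDP form is an assumption based on the description above:

```python
import numpy as np

def gmdp_unit(x, W, b):
    """x: inputs (n,); W: dendrite weights (d, n); b: dendrite biases (d,)."""
    dendrites = W @ x + b            # one linear sum per dendrite
    z = np.prod(dendrites)           # multiplicative dendrite interaction
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid output
```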
Citations: 3

Prediction error of stochastic learning machine
Pub Date: 1994-12-01 | DOI: 10.1109/ICNN.1994.374346
K. Ikeda, Noboru Murata, S. Amari
The more training examples a learning machine is given, the better it will behave. It is important to know how fast and how well this behavior improves. The average prediction error is one of the most popular criteria for evaluating it. We regard machine learning from the point of view of parameter estimation and derive the average prediction error of stochastic dichotomy machines by the information-geometrical method.
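For context, learning curves derived in this line of work (e.g. Amari and Murata's statistical theory of learning curves) take an asymptotic form like the one below, where t is the number of training examples, m the number of free parameters, and c a constant of order one (1/2 under the entropic loss). Treating this as the exact result of the present paper is an assumption:

```latex
\langle \varepsilon(t) \rangle \;\simeq\; \varepsilon_{\infty} + \frac{c\,m}{t},
\qquad t \to \infty
```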
Citations: 1

Wavelet neural networks are asymptotically optimal approximators for functions of one variable
Pub Date: 1994-12-01 | DOI: 10.1109/ICNN.1994.374179
V. Kreinovich, O. Sirisaengtaksin, S. Cabrera
Neural networks are universal approximators. For example, it has been proved (K. Hornik et al., 1989) that for every ε > 0, an arbitrary continuous function on a compact set can be ε-approximated by a 3-layer neural network. This and other results prove that in principle, any function (e.g., any control) can be implemented by an appropriate neural network. But why neural networks? An arbitrary continuous function can also be approximated by polynomials, etc. What is so special about neural networks that makes them preferable approximators? To compare different approximators, one can compare the number of bits that must be stored in order to reconstruct a function with a given precision ε. For neural networks, we must store weights and thresholds; for polynomials, we must store coefficients, etc. We consider functions of one variable and show that for some special neurons (corresponding to wavelets), neural networks are optimal approximators in the sense that they require (asymptotically) the smallest possible number of bits.
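A minimal sketch of a one-variable wavelet network: a linear combination of dilated and translated Mexican-hat wavelets fitted by least squares. The grid of scales and shifts and the choice of mother wavelet are illustrative assumptions; the paper's optimality claim concerns the number of bits stored, not this particular fitting procedure:

```python
import numpy as np

def mexican_hat(u):
    """Mexican-hat (Ricker) mother wavelet."""
    return (1 - u**2) * np.exp(-u**2 / 2)

def fit_wavelet_net(x, y, scales=(0.25, 0.5, 1.0), n_shifts=8):
    """Fit y ~ sum_i w_i * psi((x - b_i) / a_i) by linear least squares."""
    shifts = np.linspace(x.min(), x.max(), n_shifts)
    Phi = np.column_stack([mexican_hat((x - b) / a)
                           for a in scales for b in shifts])
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w, Phi @ w

x = np.linspace(-3, 3, 200)
y = np.sin(2 * x) * np.exp(-x**2 / 4)
w, y_hat = fit_wavelet_net(x, y)
print("max abs error:", np.max(np.abs(y - y_hat)))
```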
Citations: 41

Multistage neural network for pattern recognition in mammogram screening
Pub Date: 1994-12-01 | DOI: 10.1109/ICNN.1994.374887
B. Zheng, W. Qian, L. Clarke
A novel multistage neural network (MSNN) is proposed for locating and classifying micro-calcifications in digital mammography. Backpropagation (BP) with Kalman filtering (KF) is used to train the MSNN. A new nonlinear decision method is proposed to improve classification performance. The experimental results show that the sensitivity of this classification/detection is 100%, with a false-positive detection rate of less than 1 micro-calcification cluster (MCC) per image. The proposed methods are automatic, or operator-independent, and provide realistic image processing times as required for breast cancer screening programs. Full clinical analysis is planned using large databases.
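For concreteness, a minimal sketch of the two reported figures, sensitivity and false positives per image, computed from per-image counts of true and detected micro-calcification clusters (the counting convention is an assumption):

```python
def detection_metrics(per_image):
    """per_image: list of (true_clusters, detected_true, false_positives)."""
    total_true = sum(t for t, _, _ in per_image)
    detected = sum(d for _, d, _ in per_image)
    fps = sum(f for _, _, f in per_image)
    sensitivity = detected / total_true
    fp_per_image = fps / len(per_image)
    return sensitivity, fp_per_image

# Example: 3 images, every true cluster found, 2 false positives overall.
print(detection_metrics([(2, 2, 1), (1, 1, 0), (3, 3, 1)]))  # (1.0, 0.666...)
```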
Citations: 34

A comparison of neural network and fuzzy c-means methods in bladder cancer cell classification
Pub Date: 1994-12-01 | DOI: 10.1109/ICNN.1994.374891
Y. Hu, K. Ashenayi, R. Veltri, G. O'Dowd, G. Miller, R. Hurst, R. Bonner
We report the performance of cancer cell classification using supervised and unsupervised learning techniques. A single-hidden-layer feedforward NN with error back-propagation training is adopted for supervised learning, and c-means clustering methods, fuzzy and nonfuzzy, are used for unsupervised learning. Network configurations with various activation functions, namely sigmoid, sinusoid and Gaussian, are studied. A set of features, including cell size, average intensity, texture, shape factor and pgDNA, is selected as the input to the network. These features, in particular the texture information, are shown to be very effective in capturing the discriminant information in cancer cells. It is found, based on data from 467 cell images from six cases, that the neural network approach achieves a classification rate of 96.9% while fuzzy c-means scores 76.5%.
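A minimal sketch of fuzzy c-means, the unsupervised method in the comparison (fuzziness exponent m = 2 and the iteration count are conventional defaults, not values from the paper):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """X: (n_samples, n_features); returns cluster centers and memberships."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=c, replace=False)]
    for _ in range(iters):
        # distances from each center to each sample: shape (c, n_samples)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=0)               # memberships sum to 1 per sample
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
    return centers, U

# Example: two well-separated blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])
centers, U = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=0)                # hard assignment if needed
```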
Citations: 29