
Latest publications from Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)

Financial forecasting and rules extraction from trained networks
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374745
R. Kane, N. Milgram
This paper describes a forecasting approach using constrained networks. Two complementary approaches are proposed. The main property of the first approach is that it leads to an efficient predictive algorithm based on backpropagation. Some units are constrained to hold the logical information of the network, whereas the unconstrained units keep the numerical information; the task of each unit is therefore defined during training. The second approach focuses on rule extraction. Using constrained networks, we are able to extract information from trained networks. This property is essential, as it makes it possible to analyse, explain, extract and therefore control what happens inside trained networks. Simulation results for these approaches are reported.
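
For readers who want a concrete picture of the backpropagation forecasting baseline the first approach builds on, the following minimal sketch trains a one-hidden-layer network to predict the next value of a toy series from a lag window. It is an illustration under made-up assumptions only: the constrained logical/numerical units that are the paper's actual contribution are omitted, and all sizes and names are placeholders.

import numpy as np

rng = np.random.default_rng(0)
series = np.sin(0.1 * np.arange(400)) + 0.05 * rng.standard_normal(400)  # toy series

window, hidden, lr = 8, 6, 0.01
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

W1 = 0.1 * rng.standard_normal((window, hidden)); b1 = np.zeros(hidden)
W2 = 0.1 * rng.standard_normal(hidden); b2 = 0.0

for epoch in range(500):
    h = np.tanh(X @ W1 + b1)               # hidden activations
    pred = h @ W2 + b2                     # linear output unit
    err = pred - y                         # one-step-ahead prediction error
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)  # backpropagated hidden deltas
    gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("training MSE:", float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)))
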
Citations: 13
Structure adaptation in feed-forward neural networks
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374491
K. Khorasani, W. Weng
In this paper, two new structures (algorithms) are proposed for adaptively adjusting the network structure. Both neuron pruning and neuron generation are considered for a feedforward neural network. Simulation results are presented to confirm the improvements obtained by using the proposed algorithms.
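
As a rough illustration of what pruning-based structure adaptation can look like in a single-hidden-layer feedforward network, the sketch below removes the hidden unit with the smallest outgoing-weight norm. It does not reproduce the specific pruning and neuron-generation criteria proposed by the authors; the weights here are random placeholders.

import numpy as np

def prune_weakest_hidden_unit(W1, b1, W2):
    """W1: (n_in, n_hidden), b1: (n_hidden,), W2: (n_hidden, n_out)."""
    saliency = np.linalg.norm(W2, axis=1)      # outgoing-weight magnitude per hidden unit
    weakest = int(np.argmin(saliency))         # candidate unit to remove
    keep = np.delete(np.arange(W1.shape[1]), weakest)
    return W1[:, keep], b1[keep], W2[keep, :]

rng = np.random.default_rng(1)
W1, b1, W2 = rng.standard_normal((4, 5)), rng.standard_normal(5), rng.standard_normal((5, 2))
W1p, b1p, W2p = prune_weakest_hidden_unit(W1, b1, W2)
print(W1.shape, "->", W1p.shape)               # (4, 5) -> (4, 4)
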
Citations: 10
Comparing artificial neural networks to other statistical methods for medical outcome prediction
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374560
H. Burke, D. B. Rosen, P. Goodman
Survival prediction is important in cancer because it determines therapy, matches patients for clinical trials, and provides patient information. Is a backpropagation neural network more accurate at predicting survival in breast cancer than the current staging system? For over thirty years cancer outcome prediction has been based on the pTNM staging system. There are two problems with this system: (1) it is not very accurate, and (2) its accuracy cannot be improved because predictive variables cannot be added to the model without increasing the model's complexity to the point where it is no longer useful to the clinician. Using the area under the curve (AUC) of the receiver operating characteristic, the authors compare the accuracy of the following predictive models: pTNM stage, principal components analysis, classification and regression trees, logistic regression, cascade correlation neural network, conjugate gradient descent neural network, backpropagation neural network, and probabilistic neural network. Using just the TNM variables, both the backpropagation neural network (AUC .768) and the probabilistic neural network (AUC .759) are significantly more accurate than the pTNM stage system (AUC .720; all SEs < .01, p < .01 for both models compared to the pTNM model). Adding variables further increases the prediction accuracy of the backpropagation neural network (AUC .779) and the probabilistic neural network (AUC .777). Adding the new prognostic factors p53 and HER-2/neu increases the backpropagation neural network's accuracy to an AUC of .850. The neural networks perform equally well when applied to another breast cancer data set and to a colorectal cancer data set. Neural networks are able to significantly improve breast cancer outcome prediction accuracy when compared to the TNM stage system. They can combine prognostic factors to further improve accuracy. Neural networks are robust across databases and cancer sites. Neural networks can perform as well as the best traditional prediction methods, and they can capture the power of nonmonotonic predictors and discover complex genetic interactions.
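
The comparison metric itself is standard and easy to reproduce; the sketch below computes the area under the ROC curve via its rank-sum (Mann-Whitney) form. The labels and scores are toy values, not the study's data.

import numpy as np

def roc_auc(labels, scores):
    """labels: 0/1 outcomes, scores: predicted risk. Ties receive the average rank."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):            # average ranks for tied scores
        mask = scores == s
        ranks[mask] = ranks[mask].mean()
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

labels = np.array([0, 0, 1, 1, 0, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.5, 0.9])
print(round(roc_auc(labels, scores), 3))   # 0.875 on this toy data
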
Citations: 43
A Lyapunov machine for stability analysis of nonlinear systems
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374324
D. V. Prokhorov
Dynamic analysis of a nonlinear system requires a tool for studying arbitrary sets of positive semi-trajectories of the system rather than only single semi-trajectories. Such a study is difficult because of its very high computational complexity. This paper proposes a Lyapunov machine as a possible tool for the stability analysis of nonlinear autonomous systems. The Lyapunov machine is able to test global asymptotic stability, to isolate local asymptotic stability domains, and to approximate a Lyapunov function for the system.
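
As a loose illustration of the property such a tool checks for, the sketch below samples states of a damped pendulum and verifies numerically that the Lie derivative of an energy-like candidate function is non-positive over the sampled region. The system and candidate function are illustrative stand-ins; the Lyapunov machine itself, and how it constructs or approximates V, is not reproduced here.

import numpy as np

def f(state, c=0.3):
    theta, omega = state
    return np.array([omega, -np.sin(theta) - c * omega])      # damped pendulum dynamics

def V(state):
    theta, omega = state
    return (1.0 - np.cos(theta)) + 0.5 * omega ** 2            # candidate Lyapunov function

def lie_derivative(x, eps=1e-6):
    grad = np.array([(V(x + eps * e) - V(x - eps * e)) / (2 * eps) for e in np.eye(2)])
    return float(grad @ f(x))                                  # dV/dt along the flow

rng = np.random.default_rng(2)
samples = rng.uniform(-1.5, 1.5, size=(500, 2))                # states in the region of interest
print("dV/dt <= 0 everywhere sampled:",
      all(lie_derivative(x) <= 1e-8 for x in samples))
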
Citations: 18
Recurrent neural networks and Fibonacci numeration system of order s (s ≥ 2)
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374510
M. Yacoub
In the Fibonacci numeration system of order s (s ≥ 2), every positive integer admits a unique representation that does not contain s consecutive digits equal to 1 (called the normal form). We show how this normal form can be obtained from any representation by recurrent neural networks. The addition of two integers in this system, and the conversion from a Fibonacci representation to a standard binary representation (and conversely), can also be realized using recurrent neural networks.
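
For the order-2 case the normal form is the classical Zeckendorf representation, which the greedy sketch below computes; the recurrent-network realization of normalization and arithmetic, which is the paper's subject, is not attempted here.

def fib_basis(limit):
    basis = [1, 2]                        # Fibonacci weights 1, 2, 3, 5, 8, ...
    while basis[-1] + basis[-2] <= limit:
        basis.append(basis[-1] + basis[-2])
    return basis

def normal_form(n):
    """Greedy Zeckendorf digits, most significant first."""
    digits = []
    for f in reversed(fib_basis(n)):
        if f <= n:
            digits.append(1)
            n -= f
        else:
            digits.append(0)
    return digits

d = normal_form(100)                      # 100 = 89 + 8 + 3
print(d)
assert all(not (a == 1 and b == 1) for a, b in zip(d, d[1:]))   # no two adjacent 1 digits
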
Citations: 0
Pattern theory for character recognition
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374939
J. Jean, K. Xue, S. Goel
Pattern theory is an engineering theory of algorithm design which provides a robust characterization of all types of patterns. Similar to logical neural networks, the theory can be used to generalize from a set of training data. However, it optimizes network architectures as well as the "weights" of the resulting machine. In this paper, the application of the theory to character recognition is considered. The application requires a simple extension to the theory and a faster algorithm to perform a basic decomposition operation. Such an algorithm is developed and described in the paper. Some simulation results of the algorithm are also included.
Citations: 1
Learning and tuning fuzzy logic controllers through genetic algorithm
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374400
Shuqing Zeng, Yongbao He
This paper reviews current fuzzy control technology from the engineering point of view and presents a new method, based on a genetic algorithm (GA), for learning and tuning a fuzzy controller for a dynamic system. In particular, it gives the fuzzy controller a self-learning capability for achieving the prescribed control objective in a near-optimal manner. The methodology first adopts expert experience; it then uses the GA to find the fuzzy controller's optimal set of parameters. In using the GA, we must define an objective function to measure the performance of the controller. Since the behaviour of the dynamic system is hard to predict, a three-layer feedforward network has been adopted. To accelerate the learning process, a conventional simplex optimization algorithm is used to reduce the search space. Finally, an example is given to show the potential of the method.
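
The overall loop (encode controller parameters in a chromosome, score each chromosome by simulating the closed loop, then select, cross over and mutate) can be sketched as below. Note the heavy simplifications: the controller is a plain PD law on a toy first-order plant rather than a fuzzy rule base, and the neural plant model and simplex step mentioned in the abstract are omitted.

import numpy as np

rng = np.random.default_rng(3)

def closed_loop_cost(gains, setpoint=1.0, dt=0.05, steps=200):
    kp, kd = gains
    x, prev_err, cost = 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - x
        u = kp * err + kd * (err - prev_err) / dt   # placeholder PD controller
        x = x + dt * (-x + u)                       # first-order plant x' = -x + u
        prev_err = err
        cost += err ** 2 * dt                       # integral squared error
    return cost

pop = rng.uniform(0.0, 5.0, size=(30, 2))           # initial population of [kp, kd]
for generation in range(40):
    fitness = np.array([-closed_loop_cost(g) for g in pop])
    parents = pop[np.argsort(fitness)[::-1][:10]]   # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(2) < 0.5, a, b) # uniform crossover
        child = child + 0.1 * rng.standard_normal(2)  # Gaussian mutation
        children.append(np.clip(child, 0.0, 5.0))
    pop = np.vstack([parents, children])

best = pop[np.argmax([-closed_loop_cost(g) for g in pop])]
print("best [kp, kd]:", np.round(best, 2))
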
Citations: 17
A neural network based real-time robot tracking controller using position sensitive detectors
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374666
Hyoung‐Gweon Park, Se-Young Oh
A real-time visual servo tracking system for an industrial robot has been developed. A position sensitive detector (PSD), rather than a CCD, is used as the real-time vision sensor because of its fast response (the position is converted directly to an analog current). A neural network learns the complex association between the object position and the sensor reading and uses it to track that object. It also turns out that this scheme lends itself to a convenient way of teaching a work path to the robot. Furthermore, for real-time use of the neural net, a novel architecture has been developed based on the concept of input-space partitioning and local learning. It exhibits fast processing and learning as well as optimal usage of hidden neurons.
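
The input-space-partitioning idea can be illustrated independently of the robot setup: split the sensor-reading range into regions and fit a small local model in each, so a query only touches one region's model. The 1-D mapping and linear local models below are illustrative assumptions, not the authors' network.

import numpy as np

rng = np.random.default_rng(4)
readings = rng.uniform(-1.0, 1.0, 500)                                   # toy PSD readings
positions = np.arctan(2.0 * readings) + 0.01 * rng.standard_normal(500)  # toy target positions

n_bins = 8
edges = np.linspace(-1.0, 1.0, n_bins + 1)
local_models = []
for i in range(n_bins):
    mask = (readings >= edges[i]) & (readings <= edges[i + 1])
    A = np.column_stack([readings[mask], np.ones(mask.sum())])
    coef, *_ = np.linalg.lstsq(A, positions[mask], rcond=None)           # local linear fit
    local_models.append(coef)

def predict(r):
    i = min(np.searchsorted(edges, r) - 1, n_bins - 1)                   # find the region
    a, b = local_models[max(i, 0)]
    return a * r + b

print(round(predict(0.3), 3), "vs true", round(float(np.arctan(0.6)), 3))
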
Citations: 5
High capacity for the Hopfield neural networks
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374349
Chang-Jiu Chen, J. Cheung, A. Haque
In this paper, we apply the memorized vectors of our high-capacity model to the Hopfield model. We find that the Hopfield model can also achieve a high capacity.
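
For context, the standard Hopfield construction the paper starts from stores bipolar patterns with Hebbian outer products and recalls them by asynchronous threshold updates, as in the sketch below; the authors' high-capacity choice of memorized vectors is not reproduced.

import numpy as np

rng = np.random.default_rng(5)
n, n_patterns = 64, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n))

W = sum(np.outer(p, p) for p in patterns) / n        # Hebbian (outer-product) weights
np.fill_diagonal(W, 0.0)                             # no self-connections

def recall(state, sweeps=10):
    state = state.copy()
    for _ in range(sweeps):                          # asynchronous updates in random order
        for i in rng.permutation(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

probe = patterns[0].copy()
flip = rng.choice(n, size=8, replace=False)
probe[flip] *= -1                                    # corrupt 8 of 64 bits
print("bits recovered:", int((recall(probe) == patterns[0]).sum()), "/", n)
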
Citations: 1
Illusory contour detection using MRF models
Pub Date : 1994-06-27 DOI: 10.1109/ICNN.1994.374966
S. Madarasmi, T. Pong, D. Kersten
This paper presents a computational model for obtaining relative depth information from image contours. Local occlusion properties such as T-junctions and concavity are used to arrive at a global percept of distinct surfaces at various relative depths. A multilayer representation is used to classify each image pixel into the appropriate depth plane based on the local information from the occluding contours. A Bayesian framework is used to incorporate the constraints defined by the contours and the prior constraints. A solution corresponding to the maximum a posteriori probability is then determined, resulting in a depth assignment and surface assignment for each image site or pixel. The algorithm was tested on various contour images, including two classes of illusory surfaces: the Kanizsa (1979) and the line-termination illusory contours.
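
The MAP-estimation machinery can be illustrated on a much smaller problem: the sketch below runs iterated conditional modes (ICM) on a toy two-label grid with a unary data term and a 4-neighbour Potts smoothness prior. It stands in for the general Bayesian formulation only; the paper's multilayer depth/surface model and the contour-derived occlusion constraints are not reproduced.

import numpy as np

rng = np.random.default_rng(6)
truth = np.zeros((20, 20), dtype=int)
truth[5:15, 5:15] = 1                                    # a square on a background
noisy = np.where(rng.random(truth.shape) < 0.2, 1 - truth, truth)

labels, beta = noisy.copy(), 1.5                         # init with data; smoothness weight

def local_energy(lab, i, j, value):
    data = 0.0 if noisy[i, j] == value else 1.0          # unary (data) term
    smooth = 0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):    # 4-neighbourhood Potts prior
        ni, nj = i + di, j + dj
        if 0 <= ni < lab.shape[0] and 0 <= nj < lab.shape[1]:
            smooth += int(lab[ni, nj] != value)
    return data + beta * smooth

for _ in range(5):                                       # a few ICM sweeps
    for i in range(labels.shape[0]):
        for j in range(labels.shape[1]):
            labels[i, j] = min((0, 1), key=lambda v: local_energy(labels, i, j, v))

print("pixels matching ground truth:", int((labels == truth).sum()), "/", truth.size)
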
Citations: 7