
Latest publications from the Proceedings of the 9th International Conference on Neural Information Processing, 2002 (ICONIP '02)

Extensions of Lagrange programming neural network for satisfiability problem and its several variations
M. Nagamatu, T. Nakano, N. Hamada, T. Kido, T. Akahoshi
The satisfiability problem (SAT) of the propositional calculus is a well-known NP-complete problem. It requires exponential computation time as the problem size increases. We proposed a neural network, called LPPH, for the SAT. The equilibrium point of the dynamics of the LPPH exactly corresponds to the solution of the SAT, and the dynamics does not stop at any point that is not a solution of the SAT. Experimental results show the effectiveness of the LPPH for solving the SAT. In this paper we extend the dynamics of the LPPH to solve several variations of the SAT, such as the SAT with an objective function, the SAT with a preliminary solution, and the MAX-SAT. The effectiveness of the extensions is shown by experiments.
{"title":"Extensions of Lagrange programming neural network for satisfiability problem and its several variations","authors":"M. Nagamatu, T. Nakano, N. Hamada, T. Kido, T. Akahoshi","doi":"10.1109/ICONIP.2002.1198980","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1198980","url":null,"abstract":"The satisfiability problem (SAT) of the propositional calculus is a well-known NP-complete problem. It requires exponential computation time as the problem size increases. We proposed a neural network, called LPPH, for the SAT. The equilibrium point of the dynamics of the LPPH exactly corresponds to the solution of the SAT, and the dynamics does not stop at any point that is not the solution of the SAT. Experimental results show the effectiveness of the LPPH for solving the SAT. In this paper we extend the dynamics of the LPPH to solve several variations of the SAT, such as, the SAT with an objective function, the SAT with a preliminary solution, and the MAX-SAT. The effectiveness of the extensions is shown by the experiments.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115483981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
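The dynamics described in the abstract above can be illustrated with a generic continuous relaxation of SAT. The sketch below is not the authors' LPPH equations: it only conveys the Lagrangian flavour of the idea, with relaxed truth values in [0, 1], one multiplier-like weight per clause that keeps growing while the clause stays unsatisfied, and variables descending on the weighted clause violation. The product-form clause violation, the x(1 - x) saturation term, and the step size are illustrative assumptions.

```python
import numpy as np

def factors(x, clause):
    # one factor per literal: 1 - x_i for a positive literal, x_i for a negative one,
    # so the product over the clause is 0 exactly when some literal is fully satisfied
    return [(1.0 - x[abs(l) - 1]) if l > 0 else x[abs(l) - 1] for l in clause]

def lagrangian_sat(clauses, n_vars, steps=20000, dt=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.2, 0.8, n_vars)       # relaxed truth values in (0, 1)
    w = np.ones(len(clauses))                # multiplier-like weight per clause

    for _ in range(steps):
        grad = np.zeros(n_vars)
        viol = np.zeros(len(clauses))
        for r, clause in enumerate(clauses):
            fs = factors(x, clause)
            viol[r] = np.prod(fs)
            for j, lit in enumerate(clause):
                others = np.prod(fs[:j] + fs[j + 1:])
                grad[abs(lit) - 1] += w[r] * others * (-1.0 if lit > 0 else 1.0)
        # descend on the weighted violation; x*(1-x) keeps the state inside [0, 1]
        x = np.clip(x - dt * x * (1.0 - x) * grad, 0.0, 1.0)
        w += dt * viol                       # unsatisfied clauses keep gaining weight
        assignment = x > 0.5
        if all(any((lit > 0) == assignment[abs(lit) - 1] for lit in c) for c in clauses):
            return assignment                # a point corresponding to a SAT solution
    return None

# toy instance: (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(lagrangian_sat([[1, -2], [2, 3], [-1, -3]], n_vars=3))
```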
Increasing the topological quality of Kohonen's self organising map by using a hit term
E. Germen
The quality of the topology obtained at the end of the training period of Kohonen's self-organizing map (SOM) is highly dependent on the learning rate and neighborhood function chosen at the beginning. The conventional approaches to determining those parameters do not account for the data statistics and the topological characterization of the neurons. The paper proposes a new parameter, which depends on the hit ratio between the updated neuron and the best matching neuron. It has been shown that by using this parameter with the conventional learning rate and neighborhood functions, a much more adequate solution can be obtained, since it incorporates information about the data statistics during the adaptation process.
{"title":"Increasing the topological quality of Kohonen's self organising map by using a hit term","authors":"E. Germen","doi":"10.1109/ICONIP.2002.1198197","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1198197","url":null,"abstract":"The quality of the topology obtained at the end of the training period of Kohonen's self organizing map (SOM) is highly dependent on the learning rate and neighborhood function that are chosen at the beginning. The conventional approaches to determine those parameters do not account for the data statistics and the topological characterization of the neurons. The paper proposes a new parameter, which depends on the hit ratio among the updated neuron and the best matching neuron. It has been shown that by using this parameter with the conventional learning rate and neighborhood functions, much more adequate solution can be obtained since it deserves an information about data statistics during adaptation process.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"170 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115700587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
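A minimal sketch of how a hit-based term could enter an otherwise conventional SOM update is given below. The Gaussian neighbourhood and decaying learning rate are standard; the `hit_term` scaling (damping neurons that are hit far less often than the best-matching unit) is a hypothetical reading of the abstract above, not the formula from the paper.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.uniform(data.min(), data.max(), (rows, cols, data.shape[1]))
    hits = np.ones((rows, cols))                  # hit counts per neuron (start at 1 to avoid /0)
    coords = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"))

    for epoch in range(epochs):
        lr = lr0 * np.exp(-epoch / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-epoch / epochs)  # shrinking neighborhood width
        for x in rng.permutation(data):
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            hits[bmu] += 1
            # Gaussian neighborhood around the best-matching unit (BMU)
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=2)
            h = np.exp(-grid_d2 / (2 * sigma ** 2))
            # hypothetical "hit term": damp neurons hit much less often than the BMU,
            # so rarely used units are not dragged as strongly toward every sample
            hit_term = np.minimum(1.0, hits / hits[bmu])
            weights += (lr * h * hit_term)[..., None] * (x - weights)
    return weights, hits

# usage sketch: two Gaussian blobs in 2-D
data = np.vstack([np.random.randn(200, 2) + 3, np.random.randn(200, 2) - 3])
w, hits = train_som(data)
```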
K-Means Fast Learning Artificial Neural Network, an alternative network for classification
A. Phuan, S. Prakash
The K-Means Fast Learning Artificial Neural Network (K-FLANN) is an improvement of the original FLANN II (Tay and Evans, 1994). While FLANN II develops inconsistencies in clustering, influenced by data arrangements, K-FLANN addresses this issue through relocation of the cluster centroids. Results of the investigation are presented along with a discussion of the fundamental behavior of K-FLANN. Comparisons are made with the K-Means clustering algorithm and the Kohonen SOM. A further discussion is provided on how K-FLANN can qualify as an alternative method for fast classification.
{"title":"K-Means Fast Learning Artificial Neural Network, an alternative network for classification","authors":"A. Phuan, S. Prakash","doi":"10.1109/ICONIP.2002.1198196","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1198196","url":null,"abstract":"The K-Means Fast Learning Artificial Neural Network (K-FLANN) is an improvement of the original FLANN II (Tay and Evans, 1994). While FLANN II develops inconsistencies in clustering, influenced by data arrangements, K-FLANN bolsters this issue, through relocation of the clustered centroids. Results of the investigation are presented along with a discussion of the fundamental behavior of K-FLANN. Comparisons are made with the K-Means Clustering algorithm and the Kohonen SOM. A further discussion is provided on how K-FLANN can qualify as an alternative method for fast classification.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"413 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124415985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
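As a rough illustration of the centroid-relocation idea, a one-pass, order-dependent leader-style clustering can be followed by repeated reassignment and relocation of centroids to their members' means until the assignments stop changing. This is only a schematic reading of the abstract above; the vigilance radius, the leader pass, and the stopping rule are assumptions, not the K-FLANN algorithm itself.

```python
import numpy as np

def leader_pass(data, radius):
    """One order-dependent pass: a point opens a new cluster when it lies
    farther than `radius` from every centroid seen so far (FLANN-like)."""
    centroids = [data[0].copy()]
    for x in data[1:]:
        if min(np.linalg.norm(x - c) for c in centroids) > radius:
            centroids.append(x.copy())
    return np.array(centroids)

def cluster_with_relocation(data, radius, max_rounds=20):
    centroids = leader_pass(data, radius)
    labels = None
    for _ in range(max_rounds):
        # assign every point to its nearest current centroid
        d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = d.argmin(axis=1)
        if labels is not None and np.array_equal(labels, new_labels):
            break                                    # assignments stable, stop
        labels = new_labels
        # relocate each centroid to the mean of its members (a K-means step),
        # which removes much of the sensitivity to presentation order
        centroids = np.array([data[labels == k].mean(axis=0) if np.any(labels == k)
                              else centroids[k] for k in range(len(centroids))])
    return centroids, labels

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
cents, labels = cluster_with_relocation(data, radius=3.0)
print(len(cents), "clusters found")
```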
MR brain image segmentation by adaptive mixture distribution
Juin-Der Lee, P. Cheng, M. Liou
The Box-Cox transformation is applied to fit a Gaussian mixture distribution to the brain image intensity data. The advantage of using such a data-adaptive mixture model is evidenced by better image segmentation results compared with existing EM procedures using a standard Gaussian mixture distribution.
{"title":"MR brain image segmentation by adaptive mixture distribution","authors":"Juin-Der Lee, P. Cheng, M. Liou","doi":"10.1109/ICONIP.2002.1202163","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1202163","url":null,"abstract":"The Box-Cox transformation is applied to fit a Gaussian mixture distribution to the brain image intensity data. The advantage of using such data-adaptive mixture model is evidenced by yielding better image segmentation results compared to the existing EM procedures using standard Gaussian mixture distribution.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124455976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
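A minimal sketch of the general recipe described above (a Box-Cox transform followed by a Gaussian mixture fit) is shown below using SciPy and scikit-learn. The three components standing in for tissue classes, the synthetic intensities, and the particular EM implementation are assumptions for illustration; this is not the authors' procedure.

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

def segment_intensities(intensities, n_classes=3, seed=0):
    """intensities: 1-D array of strictly positive voxel intensities."""
    # data-adaptive power transform: lambda is estimated by maximum likelihood
    transformed, lam = stats.boxcox(intensities)
    gmm = GaussianMixture(n_components=n_classes, random_state=seed)
    labels = gmm.fit_predict(transformed.reshape(-1, 1))
    return labels, lam, gmm

# usage sketch with synthetic, log-normal-ish intensity data
rng = np.random.default_rng(0)
fake = np.concatenate([rng.lognormal(3.0, 0.20, 5000),
                       rng.lognormal(3.6, 0.15, 5000),
                       rng.lognormal(4.0, 0.10, 5000)])
labels, lam, gmm = segment_intensities(fake)
print("estimated Box-Cox lambda:", round(lam, 3))
```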
A dynamic neural network model on global-to-local interaction over time course
Kangwoo Lee, Jianfeng Feng, H. Buxton
We propose a neural network model based on contextual learning and a non-leaky integrate-and-fire (IF) model. The model shows dynamic properties that integrate the inputs from its own module as well as the other module over time. Moreover, the integration of inputs from different modules is not a simple accumulation of activation over the time course but depends on the interaction between the primary input, on which the behaviour of a modular network should be based, and the contextual input, which facilitates or interferes with the performance of the modular network. The learning rule is derived under the assumption that the time scale of the interval to the first spike can be adjusted during the learning process. The model is applied to explain global-to-local processing of Navon-type stimuli, in which a global letter hierarchically consists of local letters. The model provides interesting insights that may underlie the asymmetric response of global and local interaction found in many psychophysical and neuropsychological studies.
{"title":"A dynamic neural network model on global-to-local interaction over time course","authors":"Kangwoo Lee, Jianfeng Feng, H. Buxton","doi":"10.1109/ICONIP.2002.1202819","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1202819","url":null,"abstract":"We propose a neural network model based on contextual learning and non-leaky integrate-and-fire (IF) model. The model shows dynamic properties that integrate the inputs from its own module as well as the other module over time. Moreover, the integration of inputs from different modules is not simple accumulation of activation over the time course but depends on the interaction between primary input that the behaviour of a modular network should be based on, and the contextual input that facilitates or interferes with the performance of the modular network. The learning rule is derived under the assumption that time scale of the interval to first spike can be adjusted during the learning process. The model is applied to explain global-to-local processing of Navon type stimuli in which a global letter hierarchically consists of local letters. The model provides interesting insights that may underlie asymmetric response of global and local interaction found in many psychophysical and neuropsychological studies.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121829059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
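Below is a minimal sketch of a non-leaky integrate-and-fire unit driven by a primary and a contextual input stream, reporting its time to first spike. The Poisson inputs, the weights, and the threshold are illustrative assumptions; the paper's learning rule and module structure are not reproduced here.

```python
import numpy as np

def first_spike_time(primary_rate, context_rate, w_primary=1.0, w_context=0.4,
                     threshold=20.0, dt=1.0, t_max=1000.0, seed=0):
    """Non-leaky IF unit: the membrane value only accumulates (no decay)
    until it crosses `threshold`, at which point the neuron fires.
    Inputs are Poisson spike counts with the given rates (spikes per ms)."""
    rng = np.random.default_rng(seed)
    v, t = 0.0, 0.0
    while t < t_max:
        v += w_primary * rng.poisson(primary_rate * dt)   # primary (bottom-up) drive
        v += w_context * rng.poisson(context_rate * dt)   # contextual drive
        if v >= threshold:
            return t                                      # time to first spike
        t += dt
    return np.inf                                         # never reached threshold

# a facilitating context typically shortens the latency to the first spike
print(first_spike_time(primary_rate=0.05, context_rate=0.05))
print(first_spike_time(primary_rate=0.05, context_rate=0.0))
```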
Neural network methods for radar processing
A. L. Tatuzov
There are significant difficulties in radar automatic data processing arising from the poor flexibility of known algorithms and the low computational capacity of traditional computer devices. Neural networks can help the radar designer overcome these difficulties as a result of the computational power of neural parallel hardware and the adaptive capabilities of neural algorithms. The idea of applying neural nets to the most difficult radar problems is proposed and analyzed. Some examples of neural methods for radar information processing are proposed and discussed: phased array antenna weight adaptation, genetic algorithms for optimization of multibased coded signals, data association in a multitarget environment, and neural training for decision-making systems. Results of the analysis of the proposed methods prove that a considerable increase in efficiency can be achieved when neural networks are used for radar information processing problems.
{"title":"Neural network methods for radar processing","authors":"A. L. Tatuzov","doi":"10.1109/ICONIP.2002.1198969","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1198969","url":null,"abstract":"There are significant difficulties in radar automatic data processing arising from poor flexibility of known algorithms and low computational capacity of traditional computer devices. Neural networks can help the radar designer to overcome these difficulties as a result of computational power of neural parallel hardware and adaptive capabilities of neural algorithms. The idea of neural net application in the most difficult radar problems is proposed and analyzed. Some examples of neural methods for radar information processing are proposed and discussed: phase array antenna weights adaptation, genetic algorithms for optimization of multibased coded signals, data associations in multitarget environment, neural training for decision making systems. Results of the analysis for proposed methods prove that a considerable increase in efficiency can be achieved when neural networks are used for radar information processing problems.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"188 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115769864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Low power design using architecture and circuit level approaches
Dong-Sun Kim, Jin-Tea Kim, Ki-Won Kwon, Duck-Jin Chung
The purpose of this paper is to propose a methodology for low-power circuit design at the architecture and circuit level. Recently, faster computation has become very important in DSP, image processing, and multi-purpose processors, so it is very important to reduce power consumption in digital circuits while maintaining computational throughput. Design experience and research since the early 1990s have demonstrated that doing so requires a "power conscious" design methodology that addresses dissipation at every level of the design hierarchy. Many pass-transistor logic styles have been proposed to reduce power consumption and circuit size. In this paper, we introduce methodologies for low power using pass-transistor logic and the SDD (Signal Dependency Diagram) technique for parallel and pipelined architectures.
{"title":"Low power design using architecture and circuit level approaches","authors":"Dong-Sun Kim, Jin-Tea Kim, Ki-Won Kwon, Duck-Jin Chung","doi":"10.1109/ICONIP.2002.1198150","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1198150","url":null,"abstract":"The purpose of this paper is to propose the methodology of low-power circuit design in the aspect of the architecture and circuit level. Recently, more rapid computations are very important event in DSP, image processing and multi-purpose processor. So, it is very important to reduce power consumption in digital circuits and to maintain computational throughput. For this reason, the design experience and research in the early 1990s has demonstrated that doing so requires a \"power conscious\" design methodology that addresses dissipation at every level of the design hierarchy. Evidently, many pass transistor logic are proposed for reducing the power consumption and circuit size. In this paper, we introduce the methodologies for low-power using pass-transistor and SDD (Signal Dependency Diagram) technique for parallel and pipelined architecture.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116705906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Time constrain optimal method to find the minimum architectures for feedforward neural networks
Teck-Sun Tan, G. Huang
Huang et al. (1996, 2002) proposed an architecture selection algorithm called SEDNN to find the minimum architectures for feedforward neural networks, based on the golden section search method and the upper bounds on the number of hidden neurons stated in Huang (2002) and Huang et al. (1998): 2√((m + 2)N) for two-layer feedforward networks (TLFN) and N for single-layer feedforward networks (SLFN), where N is the number of training samples and m is the number of output neurons. The SEDNN algorithm worked well under the assumption that the time allowed for executing the algorithm is infinite. This paper proposes an algorithm similar to the SEDNN, but with an added time factor to cater for applications that require results within a specified period of time.
{"title":"Time constrain optimal method to find the minimum architectures for feedforward neural networks","authors":"Teck-Sun Tan, G. Huang","doi":"10.1109/ICONIP.2002.1202189","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1202189","url":null,"abstract":"Huang, et al. (1996, 2002) proposed architecture selection algorithm called SEDNN to find the minimum architectures for feedforward neural networks based on the Golden section search method and the upper bounds on the number of hidden neurons, as stated in Huang (2002) and Huang et al. (1998), to be 2/spl radic/((m + 2)N) or two layered feedforward network (TLFN) and N for single layer feedforward network (SLFN) where N is the number of training samples and m is the number of output neurons. The SEDNN algorithm worked well with the assumption that time allowed for the execution of the algorithm is infinite. This paper proposed an algorithm similar to the SEDNN, but with an added time factor to cater for applications that requires results within a specified period of time.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116916959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
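The search strategy in the abstract above can be sketched as a golden-section-style search over the hidden-layer size, capped by the 2√((m + 2)N) bound quoted there and cut off by a wall-clock budget in the spirit of the added time factor. The validation-error objective and the scikit-learn MLP used as a stand-in trainer are assumptions; this is not the SEDNN algorithm.

```python
import time
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

def validation_error(n_hidden, X_tr, y_tr, X_va, y_va):
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=500, random_state=0)
    net.fit(X_tr, y_tr)
    return np.mean((net.predict(X_va) - y_va) ** 2)

def golden_search_hidden(X, y, m_outputs=1, time_budget_s=60.0):
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
    N = len(X_tr)
    lo, hi = 1, int(np.ceil(2 * np.sqrt((m_outputs + 2) * N)))  # upper bound from the abstract
    phi = (np.sqrt(5) - 1) / 2
    start = time.time()
    best_n, best_err = hi, validation_error(hi, X_tr, y_tr, X_va, y_va)
    while hi - lo > 2:
        if time.time() - start > time_budget_s:      # time-constrained cut-off
            break
        a = int(round(hi - phi * (hi - lo)))          # two interior probe points
        b = int(round(lo + phi * (hi - lo)))
        err_a = validation_error(a, X_tr, y_tr, X_va, y_va)
        err_b = validation_error(b, X_tr, y_tr, X_va, y_va)
        if err_a < err_b:
            hi = b
            if err_a < best_err:
                best_n, best_err = a, err_a
        else:
            lo = a
            if err_b < best_err:
                best_n, best_err = b, err_b
    return best_n, best_err

# usage sketch: X, y = some regression dataset; n, err = golden_search_hidden(X, y)
```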
A quantized chaotic spiking neuron and CDMA coding
R. Furumachi, H. Torikai, T. Saito
When a higher-frequency input is applied to a chaotic spiking neuron, the state is quantized and the chaotic pulse-train changes into various co-existing super-stable periodic pulse-trains (SSPTs). Using a quantized pulse-position map, the number of SSPTs and their periods are clarified theoretically. Multiplex correlation characteristics for some sets of the SSPTs are also clarified for application to CDMA communication systems.
{"title":"A quantized chaotic spiking neuron and CDMA coding","authors":"R. Furumachi, H. Torikai, T. Saito","doi":"10.1109/ICONIP.2002.1198119","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1198119","url":null,"abstract":"Applying a higher frequency input to a chaotic spiking neuron, the state is quantized and the chaotic pulse-train is changed into various co-existing super-stable periodic pulse-trains (SSPTs). Using a quantized pulse-position map, the number of the SSPTs and their periods are clarified theoretically. Multiplex correlation characteristics for some set of the SSPTs is also clarified for application to CDMA communication systems.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117117275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Focusing on soft-computing techniques to model the role of context in determining colours
E.R. Denby
This paper describes an initial study to investigate the role of context in determining colours from a machine learning perspective. A soft-computing technique in the form of fuzzy neural networks is used to perform the intelligent processing of categorising colours given some training. The main hypothesis suggests that the neural network will not perform as well as a human familiar with the NCS colour space, because humans possess the context knowledge needed to correctly classify any colour variety into eleven groupings. This paper describes the process taken to create a dataset suitable for the network, and reports on the use of the software FuzzyCOPE 3© to investigate this hypothesis. Further, it points to issues such as: what is context knowledge? Can the network's learning be said to possess contextual knowledge of the colour space?
{"title":"Focusing on soft-computing techniques to model the role of context in determining colours","authors":"E.R. Denby","doi":"10.1109/ICONIP.2002.1198144","DOIUrl":"https://doi.org/10.1109/ICONIP.2002.1198144","url":null,"abstract":"This paper describes an initial study to investigate the role of context in determining colours from a machine learning perspective. A soft-computing technique in the form of fuzzy neural networks is used to perform the intelligent processing of categorising colours given some training. The main hypothesis suggests that the neural network will not perform as well as a human familiar with the NCS colour space, because humans possess context knowledge needed to correctly classify any colour variety into eleven groupings. This paper describes the process taken to create the dataset suitable for the network, and reports on the use of the software called FuzzyCOPE 3/sup /spl copy// to investigate this hypothesis. Further, it points to issues such as what is context knowledge? Can the network's learning be said to possess contextual knowledge of the colour space?.","PeriodicalId":146553,"journal":{"name":"Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121244280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0