
Proceedings of the 2002 International Joint Conference on Neural Networks. IJCNN'02 (Cat. No.02CH37290): Latest Publications

A new approach for solving large traveling salesman problem using evolutionary ant rules
Cheng-Fa Tsai, Chun-Wei Tsai
This paper presents a new metaheuristic method called the EA algorithm for solving the TSP (traveling salesman problem). We introduce a genetic exploitation mechanism from genetic algorithms into the ant colony system to search the solution space of the traveling salesman problem. In addition, we supply tours built by the nearest neighbor (NN) heuristic as initial solutions to EA, which yields good solutions quickly. According to our simulation results, the EA algorithm outperforms the ant colony system (ACS) in tour length on traveling salesman problems. In this work it is observed that EA or ACS with the NN approach as initial solutions provides a significant improvement in obtaining a global optimum solution or a near-global-optimum solution in large TSPs.
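The nearest-neighbor construction the abstract uses for initial tours can be sketched generically as follows. This is a textbook NN tour builder, not the authors' code; the unit-square example cities are an illustrative assumption.

```python
import math

def nearest_neighbor_tour(cities, start=0):
    """Greedy nearest-neighbor construction: from the current city,
    always move to the closest not-yet-visited city."""
    unvisited = set(range(len(cities)))
    unvisited.remove(start)
    tour = [start]
    while unvisited:
        here = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(here, cities[i]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def tour_length(cities, tour):
    """Length of the closed tour (returns to the start city)."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

# four corners of a unit square: any greedy tour walks the perimeter, length 4.0
corners = [(0, 0), (0, 1), (1, 1), (1, 0)]
tour = nearest_neighbor_tour(corners)
```

NN runs in O(n²) and gives a reasonable, if suboptimal, tour, which is exactly why it makes a cheap seed for a metaheuristic like EA or ACS.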
DOI: 10.1109/IJCNN.2002.1007746
Citations: 36
Increased performance with neural nets - an example from the marketing domain
U. Johansson, L. Niklasson
This paper shows that artificial neural networks can exploit the temporal structure in the domain of marketing investments. Two architectures are compared: a tapped delay neural network and a simple recurrent network. Their performance is evaluated, and a method for extending it is suggested. The method uses a sensitivity analysis to identify which input parameters could be removed for increased performance.
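The sensitivity analysis described above can be sketched as a finite-difference probe: perturb each input slightly and rank inputs by how much the model output moves. The toy model below is an illustrative assumption, not the paper's marketing model.

```python
def input_sensitivity(f, x, eps=1e-4):
    """Finite-difference estimate of each input's influence on f's output."""
    base = f(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] += eps
        scores.append(abs(f(perturbed) - base) / eps)
    return scores

# toy model: input 1 is never used, so it scores 0 and is a removal candidate
model = lambda x: 3 * x[0] + x[2] ** 2
scores = input_sensitivity(model, [1.0, 1.0, 2.0])
```

Inputs with the lowest scores contribute least to the output, so pruning them is the natural candidate move when trying to increase performance with fewer parameters.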
DOI: 10.1109/IJCNN.2002.1007771
Citations: 2
Training a kind of hybrid universal learning networks with classification problems
D. Li, K. Hirasawa, J. Hu, J. Murata
In the search for ever more parsimonious neural network models, this paper describes a novel approach that attempts to exploit the redundancy found in conventional sigmoidal networks. A hybrid universal learning network, constructed by combining the proposed multiplication units with summation units, is trained on several classification problems. The results clarify that multiplication units in different layers of the network improve its performance.
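A multiplication unit (often called a product unit in the literature) can be contrasted with a conventional summation unit roughly as follows. This is a generic textbook sketch, not the paper's exact unit; the log/exp form assumes positive inputs.

```python
import math

def summation_unit(x, w, bias=0.0):
    """Conventional sigmoidal unit: a squashed weighted sum."""
    return math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + bias)

def multiplication_unit(x, w):
    """Product unit: x1**w1 * x2**w2 * ...  (inputs assumed positive)."""
    return math.exp(sum(wi * math.log(xi) for wi, xi in zip(w, x)))
```

A single product unit with weights (1, 1) computes x1*x2 exactly, whereas a summation unit can only approximate such multiplicative interactions with many units, which is one source of the parsimony hybrid networks aim for.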
DOI: 10.1109/IJCNN.2002.1005559
Citations: 2
Silicon retina system applicable to robot vision
K. Shimonomura, S. Kameda, T. Yagi
A novel robot vision system was configured using a silicon retina and an FPGA circuit. The silicon retina has been developed to mimic the parallel circuit structure of the vertebrate retina. The silicon retina used here is an analog CMOS very large-scale integrated circuit which executes Laplacian-of-Gaussian (∇²G)-like filtering and frame subtraction on the image in real time. The FPGA circuit controls the silicon retina and executes image processing depending on the application of the system. This robot vision system can achieve real-time and robust computation under natural illumination with compact hardware and low power consumption.
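The ∇²G filtering the chip performs in analog hardware can be approximated in software with a discrete Laplacian-of-Gaussian kernel like the one below; the size and sigma values are illustrative choices, not the chip's parameters.

```python
import math

def log_kernel(size=9, sigma=1.4):
    """Discrete Laplacian-of-Gaussian kernel: negative center, positive ring."""
    c = size // 2
    k = [[((x * x + y * y - 2 * sigma * sigma) / sigma ** 4)
          * math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-c, c + 1)]
         for y in range(-c, c + 1)]
    mean = sum(map(sum, k)) / size ** 2
    # subtract the mean so the kernel sums to zero: a uniform image patch
    # (no edges) then produces exactly zero response
    return [[v - mean for v in row] for row in k]
```

Convolving an image with this kernel highlights intensity edges, which is the same contrast-enhancement role the retina's center-surround circuitry plays; frame subtraction then isolates temporal change between successive images.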
DOI: 10.1109/IJCNN.2002.1007496
Citations: 3
A SAM-SOM family: incorporating spatial access methods into constructive self-organizing maps
E. Cuadros-Vargas, R.A.F. Romero
Self-organizing maps (SOM) perform similarity-based information retrieval, but they cannot easily answer queries such as k-nearest neighbors. This paper presents a new family of constructive SOM, called the SAM-SOM family, which incorporates spatial access methods to answer more specific queries such as k-NN and range queries. With this family of networks, the patterns have to be presented only once. This approach dramatically speeds up the SOM training process with a minimal number of parameters.
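The two query types SAM-SOM supports have simple definitions; the brute-force versions below pin down their semantics (a real spatial access method such as an R-tree or k-d tree answers the same queries without scanning every point).

```python
import heapq
import math

def knn_query(points, query, k):
    """Indices of the k points nearest to `query` (brute force)."""
    return heapq.nsmallest(k, range(len(points)),
                           key=lambda i: math.dist(points[i], query))

def range_query(points, query, radius):
    """Indices of all points within `radius` of `query` (brute force)."""
    return [i for i, p in enumerate(points)
            if math.dist(p, query) <= radius]
```

Replacing the linear scan with a tree-structured index is what lets the constructive SOM answer these queries efficiently over its prototype vectors.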
DOI: 10.1109/IJCNN.2002.1007660
Citations: 14
Adaptive behavior with fixed weights in RNN: an overview
D. V. Prokhorov, L. A. Feldkamp, I. Tyukin
In this paper we review recent results on the adaptive behavior attained with fixed-weight recurrent neural networks (meta-learning). We argue that such behavior is a natural consequence of prior training.
DOI: 10.1109/IJCNN.2002.1007449
Citations: 50
Experimental analysis of support vector machines with different kernels based on non-intrusive monitoring data
T. Onoda, H. Murata, Gunnar Rätsch, K. Muller
The estimation of the states of household electric appliances was the first application of support vector machines in the power system research field. It is therefore important for that field to evaluate the support vector machine on this task from a practical point of view. We use the data proposed in Onoda and Ratsch (2000) for this purpose. We place particular emphasis on comparing different types of support vector machines obtained by choosing different kernels. We report results for polynomial kernels, radial basis function kernels, and sigmoid kernels. In the estimation of the states of household electric appliances, the three kernels achieved different error rates. We also place particular emphasis on comparing the different capacities of support vector machines obtained by choosing different regularization constants and kernel parameters. The results show that the choice of regularization constants and kernel parameters is as important as the choice of kernel function for real-world applications.
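The three kernel families compared in the paper have the standard forms below. These are pure-Python sketches with illustrative default hyperparameters; the paper's actual regularization constants and kernel parameters are precisely what it studies empirically.

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def poly_kernel(x, y, degree=3, coef0=1.0):
    """Polynomial kernel: (x.y + c)^d."""
    return (dot(x, y) + coef0) ** degree

def rbf_kernel(x, y, gamma=0.5):
    """Radial basis function kernel: exp(-gamma * ||x - y||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def sigmoid_kernel(x, y, alpha=0.1, coef0=0.0):
    """Sigmoid kernel: tanh(alpha * x.y + c). Unlike the other two, it is
    not positive semi-definite for all (alpha, c), one practical caveat."""
    return math.tanh(alpha * dot(x, y) + coef0)
```

Swapping the kernel changes the geometry of the implicit feature space, which is why the paper observes different error rates and why the kernel parameters interact so strongly with the regularization constant C.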
DOI: 10.1109/IJCNN.2002.1007480
Citations: 20
Genetic evolution of neural networks that remember
J. Dávila
The GENDALC system has been previously used to evolve NN topologies for natural language tasks. This paper presents results on additional tasks that require remembering and processing of previous input patterns. These results indicate that GENDALC is particularly well suited for tasks that require remembering.
DOI: 10.1109/IJCNN.2002.1007656
Citations: 2
Motivation for a genetically-trained topography-preserving map
J. S. Kirk, J. Zurada
It is often observed that the lattice of a well-trained self-organizing map (SOM) preserves the topology of the data set. In this paper, we examine what is meant by this claim and discuss a related goal for a dimension-reducing mapping. We term this goal "topography preservation", and attempt to fulfill it using a two-stage training method called genetically-trained topographic mapping. In the first stage of training, a clustering algorithm is used to map sets of input data points to each neuron. In the second stage, a genetic algorithm assigns adjacencies between the neurons of the output lattice according to the fitness defined by the topography preservation goal. Stock market data and an artificial data set are used to illustrate the relative strengths of the standard SOM and the new algorithm.
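The second training stage, where evolutionary search arranges cluster prototypes on the output lattice, can be sketched as a permutation search minimizing the summed distance between centroids assigned to adjacent lattice cells. A swap-mutation hill climber stands in here for the paper's genetic algorithm, and the 2x2 example is illustrative.

```python
import math
import random

def topo_cost(perm, centroids, width):
    """Sum of centroid distances across lattice-adjacent neuron pairs."""
    cost = 0.0
    for i, c in enumerate(perm):
        if (i + 1) % width != 0:                  # right neighbor on lattice
            cost += math.dist(centroids[c], centroids[perm[i + 1]])
        if i + width < len(perm):                 # bottom neighbor on lattice
            cost += math.dist(centroids[c], centroids[perm[i + width]])
    return cost

def evolve_layout(centroids, width, iters=500, seed=0):
    """Evolve an assignment of centroids to lattice cells by swap mutation."""
    rng = random.Random(seed)
    perm = list(range(len(centroids)))
    best = topo_cost(perm, centroids, width)
    for _ in range(iters):
        a, b = rng.sample(range(len(perm)), 2)
        perm[a], perm[b] = perm[b], perm[a]       # propose a swap
        cost = topo_cost(perm, centroids, width)
        if cost <= best:
            best = cost                           # keep improving/equal swaps
        else:
            perm[a], perm[b] = perm[b], perm[a]   # revert worsening swaps
    return perm, best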
DOI: 10.1109/IJCNN.2002.1005504
Citations: 6
A multi-level and multi-scale evolutionary modeling system for scientific data
Zhou Kang, Yan Li, H. de Garis, Lishan Kang
The discovery of scientific laws is always built on the basis of scientific experiments and observed data. Any real-world complex system is governed by basic laws at the macroscopic, submicroscopic, and microscopic levels. Discovering these underlying laws from observed data is the most important task of data mining (DM) and knowledge discovery in databases (KDD). Based on evolutionary computation, this paper proposes a multi-level and multi-scale evolutionary modeling system which models the macro-behavior of the system by ordinary differential equations and the micro-behavior by natural fractals. The system can be used to model and predict scientifically observed time series, such as sunspot records and flood-season precipitation, and consistently obtains good results.
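The macro-level idea, fitting the parameters of an ordinary differential equation to observed data with an evolutionary search, can be sketched minimally as below. A (1+1)-style hill climber on a single growth-rate parameter stands in for the paper's full system; the model dy/dt = k*y and the target endpoint are illustrative assumptions, not the paper's data.

```python
import random

def euler(deriv, y0, t0, t1, steps):
    """Fixed-step Euler integration of dy/dt = deriv(t, y)."""
    h = (t1 - t0) / steps
    y, t = y0, t0
    for _ in range(steps):
        y += h * deriv(t, y)
        t += h
    return y

def fit_growth_rate(observed_end, iters=400, seed=1):
    """Evolve k in dy/dt = k*y so the integrated endpoint matches the data."""
    rng = random.Random(seed)
    loss = lambda k: (euler(lambda t, y: k * y, 1.0, 0.0, 1.0, 200)
                      - observed_end) ** 2
    k, step = 0.0, 0.5
    for _ in range(iters):
        candidate = k + rng.gauss(0.0, step)
        if loss(candidate) < loss(k):
            k = candidate                 # accept improving mutation
        else:
            step *= 0.99                  # shrink mutation size on failure
    return k

k = fit_growth_rate(1.6487)               # observed endpoint close to e**0.5
```

The same accept-if-better loop generalizes to vectors of ODE coefficients, which is the essence of evolutionary modeling: the integrator scores a candidate law, and selection pressure drives the population toward laws consistent with the observations.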
DOI: 10.1109/IJCNN.2002.1005565
Citations: 0