
Latest publications: IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)

Analysis and prediction of cranberry growth with dynamical neural network models
C. H. Chen, Bichuan Shen
Cranberry plants are very sensitive to weather and other conditions. In this paper, the condition of cranberry growth is analyzed through PCA (principal component analysis) of the minimum cranberry spectral match measurement data. Three neural network models are applied to one-month-ahead prediction. The simulation results show the high-performance modeling ability of these neural networks. The reliable prediction provided by the dynamic neural networks will be useful for farmers to monitor and control the cranberry growth process.
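As a rough illustration of the pipeline the abstract describes, the sketch below runs PCA on a hypothetical spectral matrix and fits a one-month-ahead linear readout in place of the paper's unspecified dynamic neural network models; all data and dimensions here are invented for illustration.

```python
import numpy as np

# Hypothetical stand-in for the spectral match measurements: 48 months x 20 bands.
rng = np.random.default_rng(0)
X = rng.normal(size=(48, 20))

# PCA via SVD of the mean-centered data.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T          # keep the first 3 principal components

# One-month-ahead prediction: map the PC scores at month t to month t+1
# with a ridge-regularized linear readout (a stand-in for the paper's
# dynamic neural network models, which are not specified in the abstract).
A, B = scores[:-1], scores[1:]
W = np.linalg.solve(A.T @ A + 1e-3 * np.eye(3), A.T @ B)
pred = scores[-1] @ W           # forecast of next month's PC scores
print(pred)
```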
{"title":"Analysis and prediction of cranberry growth with dynamical neural network models","authors":"C. H. Chen, Bichuan Shen","doi":"10.1109/IJCNN.1999.836208","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.836208","url":null,"abstract":"Cranberry plants are very sensitive to weather and other conditions. In this paper, the condition of cranberry growth is analyzed through PCA (principle component analysis) of the minimum cranberry spectral match measurement data. Three neural network models are applied to the one-month ahead prediction. The simulation results show the high performance modeling ability of these neural networks. The reliable prediction provided by the dynamic neural networks will be useful for the farmers to monitor and control the cranberry growth process.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115234532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
On the conditions of outer-supervised feedforward neural networks for null cost learning
De-shuang Huang
This paper investigates, from the viewpoint of linear algebra, the local minima of least-squares error cost functions defined at the outputs of outer-supervised feedforward neural networks (FNN). For a specific case, we also show that spatially collinear samples (probably output by the final hidden layer) can easily be separated with a null-cost error function even if the condition M ≥ N is not satisfied. In light of these conclusions, we give a general method for designing a network architecture suited to a specific problem.
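The rank condition behind null-cost learning can be checked numerically. A minimal sketch, assuming hypothetical hidden-layer activations H: when the N training samples are fewer than the M hidden units and H has full row rank, the least-squares output layer reaches zero cost.

```python
import numpy as np

# Linear-algebra view of null-cost learning: with hidden activations
# H (N samples x M units) and targets T (N x outputs), the least-squares
# cost at the output layer is zero exactly when T lies in the column
# space of H, which is guaranteed when rank(H) = N (e.g. M >= N and H full rank).
rng = np.random.default_rng(1)
N, M = 5, 8                      # fewer samples than hidden units
H = np.tanh(rng.normal(size=(N, M)))
T = rng.normal(size=(N, 2))

W, *_ = np.linalg.lstsq(H, T, rcond=None)   # output weights
residual = np.linalg.norm(H @ W - T)
print(f"rank(H) = {np.linalg.matrix_rank(H)}, residual = {residual:.2e}")
```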
{"title":"On the conditions of outer-supervised feedforward neural networks for null cost learning","authors":"De-shuang Huang","doi":"10.1109/IJCNN.1999.831061","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.831061","url":null,"abstract":"This paper investigates, from the viewpoint of linear algebra, the local minima of least square error cost functions defined at the outputs of outer-supervised feedforward neural networks (FNN). For a specific case, we also show that those spacedly colinear samples (probably output by the final hidden layer) will be easily separated with null-cost error function even if the condition M/spl ges/N is not satisfied. In the light of these conclusions we shall give a general method for designing a suitable architecture network to solve a specific problem.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115413075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Design and analysis of neural networks for systems optimization
I. Silva, M. E. Bordon, A. Souza
Artificial neural networks are dynamic systems consisting of highly interconnected, parallel nonlinear processing elements that have been shown to be extremely effective in computation. This paper presents an artificial neural network architecture that can be used to solve several classes of optimization problems. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. Problems that can be treated by the proposed approach include combinatorial optimization and dynamic programming problems.
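For readers who want a concrete picture, here is a sketch of generic discrete Hopfield dynamics descending an energy function; it does not reproduce the paper's valid-subspace computation of the weights, which is the paper's actual contribution.

```python
import numpy as np

# Generic discrete Hopfield dynamics (illustrative only; the paper's
# valid-subspace computation of W and b is not reproduced here).
# Energy E(v) = -0.5 v^T W v - b^T v decreases under asynchronous updates.
rng = np.random.default_rng(2)
n = 8
W = rng.normal(size=(n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)           # symmetric weights, zero diagonal
b = rng.normal(size=n)
v = rng.choice([-1.0, 1.0], size=n)

for _ in range(100):             # asynchronous updates toward a minimum
    i = rng.integers(n)
    v[i] = 1.0 if W[i] @ v + b[i] >= 0 else -1.0

print("state:", v, "energy:", -0.5 * v @ W @ v - b @ v)
```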
{"title":"Design and analysis of neural networks for systems optimization","authors":"I. Silva, M. E. Bordon, A. Souza","doi":"10.1109/IJCNN.1999.831583","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.831583","url":null,"abstract":"Artificial neural networks are dynamic systems consisting of highly interconnected and parallel nonlinear processing elements that are shown to be extremely effective in computation. This paper presents an architecture of artificial neural networks that can be used to solve several classes of optimization problems. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. Among the problems that can be treated by the proposed approach include combinational optimization problems and dynamic programming problems.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115430109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Neural networks for consciousness: the central representation
John G. Taylor
A framework is developed, and criteria thereby deduced, for a neural site to be regarded as essential for the creation of consciousness. Various sites in the brain are considered, but only very few are found to satisfy all of the criteria. The framework proposed here is based on the notion of the central representation, regarded as being composed of information deemed intrinsic to awareness. In particular, the central representation is suggested as residing in the inferior parietal lobes. Implications of this identification are discussed.
{"title":"Neural networks for consciousness: the central representation","authors":"John G. Taylor","doi":"10.1109/IJCNN.1999.831462","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.831462","url":null,"abstract":"A framework is developed, and criteria thereby deduced, for a neural site to be regarded as essential for the creation of consciousness. Various sites in the brain are considered but only very few are found to satisfy all of the criteria. The framework proposed here is barred on the notion of the central representation regarded as being composed of information deemed intrinsic to awareness. In particular, the central representation is suggested as being in the inferior parietal lobes. Implications of this identification are discussed.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115702196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Self-trapping in an attractor neural network with nearest neighbor synapses mimics full connectivity
R. Pavloski, M. Karimi
A means of providing the feedback necessary for an associative memory is suggested by self-trapping, the development of localization phenomena and order in coupled physical systems. Following the lead of Hopfield (1982, 1984), who exploited the formal analogy of a fully-connected ANN to an infinite-range interaction Ising model, we have carried through a similar development to demonstrate that self-trapping networks (STNs) with only near-neighbor synapses develop attractor states through localization of a self-trapping input. The attractor states of the STN are the stored memories of this system, and are analogous to the magnetization developed in a self-trapping 1D Ising system. Post-synaptic potentials for each stored memory become trapped at non-zero values, and a sparsely-connected network evolves to the corresponding state. Both analytic and computational studies of the STN show that this model mimics a fully-connected ANN.
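A minimal sketch of the nearest-neighbor idea, assuming generic Ising-style asynchronous updates with a persistent trapping input rather than the authors' exact STN equations; the pattern, couplings, and input strength are invented for illustration.

```python
import numpy as np

# Illustrative 1D nearest-neighbor attractor dynamics (not the authors'
# exact STN model): each unit couples only to its two neighbors plus a
# persistent "self-trapping" input h aligned with a stored pattern.
rng = np.random.default_rng(3)
n, J, h_strength = 32, 1.0, 0.5
pattern = rng.choice([-1.0, 1.0], size=n)              # stored memory
h = h_strength * pattern                               # trapping input
s = np.where(rng.random(n) < 0.7, pattern, -pattern)   # noisy recall cue

for _ in range(2000):
    i = rng.integers(n)
    field = J * (s[(i - 1) % n] + s[(i + 1) % n]) + h[i]
    s[i] = 1.0 if field >= 0 else -1.0

print("overlap with stored pattern:", float(s @ pattern) / n)
```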
{"title":"Self-trapping in an attractor neural network with nearest neighbor synapses mimics full connectivity","authors":"R. Pavloski, M. Karimi","doi":"10.1109/IJCNN.1999.831586","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.831586","url":null,"abstract":"A means of providing the feedback necessary for an associative memory is suggested by self-trapping, the development of localization phenomena and order in coupled physical systems. Following the lead of Hopfield (1982, 1984) who exploited the formal analogy of a fully-connected ANN to an infinite ranged interaction Ising model, we have carried through a similar development to demonstrate that self-trapping networks (STNs) with only near-neighbor synapses develop attractor states through localization of a self-trapping input. The attractor states of the STN are the stored memories of this system, and are analogous to the magnetization developed in a self-trapping 1D Ising system. Post-synaptic potentials for each stored memory become trapped at non-zero valves and a sparsely-connected network evolves to the corresponding state. Both analytic and computational studies of the STN show that this model mimics a fully-connected ANN.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115745958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Generation of explicit knowledge from empirical data through pruning of trainable neural networks
Alexander N Gorban, E. M. Mirkes, V. G. Tsaregorodtsev
This paper presents a generalized technology for extracting explicit knowledge from data. The main ideas are: 1) maximal reduction of network complexity (not only removal of neurons or synapses, but removal of all unnecessary elements and signals, and reduction of the complexity of the elements themselves); 2) use of an adjustable and flexible pruning process (the user should be able to prune the network in their own way in order to achieve a desired network structure for extracting rules of the desired type and form); and 3) extraction of rules in any desired form, not just a predetermined one. Some considerations and notes on network architecture, the training process, and the applicability of currently developed pruning techniques and rule extraction algorithms are discussed. This technology, which we have been developing for more than 10 years, has allowed us to create dozens of knowledge-based expert systems.
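As one concrete instance of an adjustable pruning step (the paper's technology is much broader, also removing neurons, signals, and element complexity), a user-controlled magnitude threshold might look like this; the function name and data are hypothetical.

```python
import numpy as np

# One adjustable pruning step: zero out the smallest-magnitude fraction
# of weights, with the fraction chosen by the user.
def prune_by_magnitude(W: np.ndarray, fraction: float) -> np.ndarray:
    """Return a copy of W with the smallest `fraction` of weights zeroed."""
    threshold = np.quantile(np.abs(W), fraction)
    return np.where(np.abs(W) < threshold, 0.0, W)

W = np.random.default_rng(4).normal(size=(6, 4))
W_pruned = prune_by_magnitude(W, fraction=0.5)
print("nonzero before:", np.count_nonzero(W), "after:", np.count_nonzero(W_pruned))
```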
{"title":"Generation of explicit knowledge from empirical data through pruning of trainable neural networks","authors":"Alexander N Gorban, E. M. Mirkes, V. G. Tsaregorodtsev","doi":"10.1109/IJCNN.1999.830876","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.830876","url":null,"abstract":"This paper presents a generalized technology of extraction of explicit knowledge from data. The main ideas are: 1) maximal reduction of network complexity (not only removal of neurons or synapses, but removal all the unnecessary elements and signals and reduction of the complexity of elements); 2) using of adjustable and flexible pruning process (the user should have a possibility to prune network on his own way in order to achieve a desired network structure for the purpose of extraction of rules of desired type and form); and 3) extraction of rules not in predetermined but any desired form. Some considerations and notes about network architecture and training process and applicability of currently developed pruning techniques and rule extraction algorithms are discussed. This technology, being developed by us for more than 10 years, allowed us to create dozens of knowledge-based expert systems.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116658130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
The α-EM learning and its cookbook: from mixture-of-expert neural networks to movie random field
Y. Matsuyama, T. Ikeda, Tomoaki Tanaka, S. Furukawa, N. Takeda, Takeshi Niimoto
The α-EM algorithm is a proper extension of the traditional log-EM algorithm. The new algorithm is based on the α-logarithm, while the traditional one uses the plain logarithm; the case α = -1 corresponds to the log-EM algorithm. Since the speed of the α-EM algorithm has already been reported for learning problems, this paper shows that closed-form E-steps can be obtained for a wide class of problems using a set of common techniques. That is, a cookbook for the α-EM algorithm is presented. The recipes include unsupervised neural networks, supervised neural networks for various gating, hidden Markov models, and Markov random fields for moving object segmentation. Reasoning for the speedup is also given.
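The α-logarithm at the heart of the algorithm reduces to the ordinary logarithm at α = -1. The sketch below uses the form given in Matsuyama's α-EM papers; treat the exact expression as an assumption rather than a quotation from this abstract.

```python
import numpy as np

# Assumed form of the alpha-logarithm (from Matsuyama's alpha-EM work):
# L_alpha(r) = (2 / (1 + alpha)) * (r ** ((1 + alpha) / 2) - 1),
# which tends to log(r) as alpha -> -1.
def alpha_log(r, alpha):
    if np.isclose(alpha, -1.0):
        return np.log(r)
    return (2.0 / (1.0 + alpha)) * (r ** ((1.0 + alpha) / 2.0) - 1.0)

r = np.linspace(0.5, 2.0, 4)
print(alpha_log(r, -0.999))   # approximately log(r)
print(np.log(r))
```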
{"title":"The /spl alpha/-EM learning and its cookbook: from mixture-of-expert neural networks to movie random field","authors":"Y. Matsuyama, T. Ikeda, Tomoaki Tanaka, S. Furukawa, N. Takeda, Takeshi Niimoto","doi":"10.1109/IJCNN.1999.831162","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.831162","url":null,"abstract":"The /spl alpha/-EM algorithm is a proper extension of the traditional log-EM algorithm. This new algorithm is based on the /spl alpha/-logarithm, while the traditional one uses the logarithm. The case of /spl alpha/=-1 corresponds to the log-EM algorithm. Since the speed of the /spl alpha/-EM algorithm was reported for learning problems, this paper shows that closed-form E-steps can be obtained for a wide class of problems. There is a set of common techniques. That is, a cookbooks for the /spl alpha/-EM algorithm is presented. The recipes include unsupervised neural networks, supervised neural networks for various gating, hidden Markov models and Markov random fields for moving object segmentation. Reasoning for the speedup is also given.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116984822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Time topology for the self-organizing map
P. Somervuo
Time information in the input data is used to evaluate how well the self-organizing map stores and represents temporal feature vector sequences. A new node neighborhood, which takes the temporal order of the input samples into account, is defined for the map. A connection is created between the two map nodes that are the best-matching units for two successive input samples in time. This results in a time-topology-preserving network.
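A minimal sketch of the temporal-connection idea: after ordinary SOM training, successive best-matching units are linked. The codebook and sequence here are hypothetical, and the paper's exact connection rule is not reproduced.

```python
import numpy as np

# Link the best-matching units (BMUs) of successive input vectors on a
# trained SOM; link counts record the temporal topology of the sequence.
rng = np.random.default_rng(5)
codebook = rng.normal(size=(16, 3))          # stand-in for 16 trained SOM nodes
sequence = rng.normal(size=(50, 3))          # temporal feature vectors

def bmu(x):
    return int(np.argmin(np.linalg.norm(codebook - x, axis=1)))

links = np.zeros((16, 16), dtype=int)
bmus = [bmu(x) for x in sequence]
for a, b in zip(bmus[:-1], bmus[1:]):
    links[a, b] += 1                          # connect successive BMUs

print("strongest temporal link:", np.unravel_index(links.argmax(), links.shape))
```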
{"title":"Time topology for the self-organizing map","authors":"P. Somervuo","doi":"10.1109/IJCNN.1999.832671","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.832671","url":null,"abstract":"Time information of the input data is used for evaluating the goodness of the self-organizing map to store and represent temporal feature vector sequences. A new node neighborhood is defined for the map which takes the temporal order of the input samples into account. A connection is created between those two map modes which are the best-matching units for two successive input samples in time. This results in the time-topology preserving network.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":" 17","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120943266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Approximation of a function and its derivatives in feedforward neural networks
E. Basson, A. Engelbrecht
A new learning algorithm is presented that learns a function and its first-order derivatives. Derivatives are learned together with the function using gradient descent. Preliminary results show that the algorithm accurately approximates the derivatives.
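A plausible reading of this setup, sketched with autograd so the model's derivative enters the loss; this resembles what is now called Sobolev training, and the paper's exact algorithm is not given in the abstract.

```python
import torch

# Joint function/derivative learning: penalize both f(x) error and
# f'(x) error, with the model's derivative obtained by autograd.
torch.manual_seed(0)
x = torch.linspace(-2, 2, 64).unsqueeze(1)
y, dy = torch.sin(x), torch.cos(x)           # target function and derivative

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(500):
    x_req = x.clone().requires_grad_(True)
    y_hat = net(x_req)
    dy_hat, = torch.autograd.grad(y_hat.sum(), x_req, create_graph=True)
    loss = ((y_hat - y) ** 2).mean() + ((dy_hat - dy) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(loss))
```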
{"title":"Approximation of a function and its derivatives in feedforward neural networks","authors":"E. Basson, A. Engelbrecht","doi":"10.1109/IJCNN.1999.831531","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.831531","url":null,"abstract":"A new learning algorithm is presented that learns a function and its first-order derivatives. Derivatives are learned together with the function using gradient descent. Preliminary results show that the algorithm accurately approximates the derivatives.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"19 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120993190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
A neural network endowed with symbolic processing ability
D. Vogiatzis, A. Stafylopatis
We propose a neural network method for generating symbolic expressions using reinforcement learning. In the proposed method, a human decides on the kind and number of primitive functions which, under appropriate composition (in the mathematical sense), can represent a mapping between two domains. The appropriate composition is found by an agent that tries many compositions and receives a reward depending on the quality of the composed function.
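A heavily simplified stand-in for the described agent, using random search over compositions and a fit-based reward instead of a full reinforcement-learning loop; the primitives and the target mapping are invented for illustration.

```python
import numpy as np

# Search over compositions of user-chosen primitives; score each
# candidate by how well it matches a hypothetical target x -> sin(x)**2.
rng = np.random.default_rng(6)
primitives = {"sin": np.sin, "cos": np.cos, "sq": np.square, "neg": np.negative}
x = np.linspace(0, np.pi, 50)
target = np.sin(x) ** 2

best_expr, best_reward = None, -np.inf
for _ in range(200):
    names = list(rng.choice(list(primitives), size=rng.integers(1, 4)))
    y = x
    for name in names:
        y = primitives[name](y)              # compose left-to-right
    reward = -np.mean((y - target) ** 2)     # higher is better
    if reward > best_reward:
        best_expr, best_reward = names, reward

print("best composition:", best_expr, "reward:", best_reward)
```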
{"title":"A neural network endowed with symbolic processing ability","authors":"D. Vogiatzis, A. Stafylopatis","doi":"10.1109/IJCNN.1999.830809","DOIUrl":"https://doi.org/10.1109/IJCNN.1999.830809","url":null,"abstract":"We propose a neural network method for the generation of symbolic expressions using reinforcement learning. According to the proposed method, a human decides on the kind and number of primitive functions which, with the appropriate composition (in the mathematical sense), can represent a mapping between two domains. The appropriate composition is achieved by an agent which tries many compositions and receives a reward depending on the quality of the composed function.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127486303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0