
Latest publications: IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)

Bayesian neural networks with correlating residuals
Aki Vehtari, J. Lampinen
In a multivariate regression problem it is often assumed that the residuals of the outputs are independent of each other. In many applications a more realistic model would allow dependencies between the outputs. In this paper we show how a Bayesian treatment using the Markov chain Monte Carlo method can allow for a full covariance matrix with a multilayer perceptron neural network.
DOI: 10.1109/IJCNN.1999.832623
Citations: 7
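The core modelling point of the abstract above — that allowing correlated output residuals fits multivariate data better than assuming independence — can be illustrated outside the Bayesian/MCMC machinery with a plain likelihood comparison. This is a minimal sketch with simulated residuals, not the paper's MLP posterior sampling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate residuals of a 2-output regression that are strongly correlated.
true_cov = np.array([[1.0, 0.8],
                     [0.8, 1.0]])
residuals = rng.multivariate_normal([0.0, 0.0], true_cov, size=500)

def gaussian_loglik(r, cov):
    """Average log-density of the residual rows under N(0, cov)."""
    d = r.shape[1]
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum('ni,ij,nj->n', r, inv, r)
    return np.mean(-0.5 * (d * np.log(2 * np.pi) + logdet + quad))

full_cov = np.cov(residuals, rowvar=False)    # full covariance model
diag_cov = np.diag(np.diag(full_cov))         # independence assumption

ll_full = gaussian_loglik(residuals, full_cov)
ll_diag = gaussian_loglik(residuals, diag_cov)
assert ll_full > ll_diag  # the correlated model fits correlated residuals better
```

The fitted full covariance is the Gaussian maximum-likelihood estimate, so it can never fit worse than its diagonal restriction; the gap grows with the strength of the residual correlation.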
A new scheme for extracting multi-temporal sequence patterns
P. Hong, S. Ray, Thomas Huang
This paper proposes a new scheme for unsupervised multi-temporal sequence pattern extraction. The main idea of the scheme is iterative coarse-to-fine data examination. We decompose a pattern into ambiguous sub-patterns and distinguishable sub-patterns (DSPs). In each iteration, we coarsely examine the training temporal signal sequence by training an Elman neural network. The trained Elman network is used to select the DSP candidate set. Then, we look at the training signals around the DSPs and use maximum likelihood criteria to expand them into whole patterns. We cut out the newly found patterns from the training signal sequence and repeat the whole procedure until no more new patterns are found. The experimental results show that this method is promising.
DOI: 10.1109/IJCNN.1999.833494
Citations: 8
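The building block the scheme retrains at each iteration — an Elman (simple recurrent) network, whose hidden state is fed back as a context input at the next time step — can be sketched minimally. The class below is a generic forward pass with assumed layer sizes, not the authors' trained detector:

```python
import numpy as np

rng = np.random.default_rng(4)

class Elman:
    """Minimal Elman network: the hidden state is fed back as context."""
    def __init__(self, n_in, n_hid, n_out):
        s = 0.5
        self.Wx = rng.uniform(-s, s, (n_hid, n_in))    # input weights
        self.Wh = rng.uniform(-s, s, (n_hid, n_hid))   # context (recurrent) weights
        self.Wo = rng.uniform(-s, s, (n_out, n_hid))   # output weights
        self.h = np.zeros(n_hid)

    def step(self, x):
        # New hidden state depends on the current input and the previous state.
        self.h = np.tanh(self.Wx @ x + self.Wh @ self.h)
        return self.Wo @ self.h

net = Elman(n_in=1, n_hid=8, n_out=1)
outputs = [net.step(np.array([v])) for v in [0.1, 0.5, 0.9]]
assert len(outputs) == 3 and all(o.shape == (1,) for o in outputs)
```

Because the context weights `Wh` carry state across `step` calls, the network's output at each step depends on the whole signal prefix — the property the scheme exploits for coarse examination of temporal sequences.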
Knowledge matching model with dynamic weights based on the primary visual cortex
Yuan Bo, Liming Zhang
Even in the field of biology, the principle of how knowledge can be efficiently utilized in a neural network has not been fully resolved. This paper discusses a new model for knowledge matching based on the structure of the V1 area in the biological visual system. The contour of the object is stored as knowledge in the form of a chain code. During the matching process, the chain code is used to control the dynamics of the neurons in a V1-like neural network. Cooperating with the dynamic weights, active neurons reconstruct the object's contour at the place where the object is in the visual field. This model is an exploration of how knowledge is represented and utilized in the brain.
DOI: 10.1109/IJCNN.1999.831477
Citations: 0
A gain perturbation method to improve the generalization performance for the recurrent neural network misfire detector
Pu Sun, K. Marko
A common constraint on the application of neural networks to the diagnostics and control of mass-manufactured systems is that training sets can only be obtained from a limited number of system exemplars. As a consequence, the variations of dynamic response across the systems pose a problem in obtaining excellent performance from the trained neural networks. In this paper we describe a gain perturbation method (GPM) to improve the generalization performance of neural network diagnostic monitors trained on a data set obtained from one individual vehicle and tested on data from another vehicle. The results show a significant improvement in generalization performance for neural networks trained with GPM over those trained without it.
DOI: 10.1109/IJCNN.1999.832599
Citations: 1
Priority ordered architecture of neural networks
Wang Shoujue, Lu Huaxiang, C. Xiangdong, Li Yujian
In the architecture introduced, outputs of neurons (or neural nets) have different priorities, besides the differences in topological position and value of these outputs. We discuss how priority ordered neural networks (PONNs) are similar to knowledge representation in the human brain. A general mathematical description of the PONN is also introduced. The priority ordered single layer perceptron (POSLP) and the priority ordered radial basis function nets (PORBFN) for pattern classification are analyzed. The experiments show that the learning speed of the POSLP and PORBFN is much faster than that of multilayered feedforward neural networks trained with existing BP algorithms.
DOI: 10.1109/IJCNN.1999.831054
Citations: 3
The optimal value of self-connection
D. Gorodnichy
The fact that reducing self-connections improves the performance of autoassociative networks built by the pseudo-inverse learning rule has been known for quite a while, but has not yet been studied in detail. In particular, it is known that decreasing the self-connection increases the direct attraction radius of the network, and also that it increases the number of spurious dynamic attractors. Thus, it has been concluded that the optimal value of the coefficient of self-connection reduction D lies somewhere in the range (0, 0.5). This paper gives an explicit answer to the question of what the optimal value of the self-connection reduction is. It shows how the indirect attraction radius increases with the decrease of D. A summary of the results pertaining to the phenomenon is presented.
DOI: 10.1109/IJCNN.1999.831579
Citations: 14
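The setup in this abstract is concrete enough to sketch: store bipolar patterns with the pseudo-inverse learning rule, then scale the diagonal of the weight matrix by (1 - D). A minimal sketch (network size, pattern count, and the value of D are illustrative choices, not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_patterns = 32, 4
V = rng.choice([-1.0, 1.0], size=(n_neurons, n_patterns))  # patterns as columns

# Pseudo-inverse learning rule: W is the projector onto the pattern subspace,
# so W @ V == V and every stored pattern is a fixed point of the dynamics.
W = V @ np.linalg.pinv(V)

def reduce_self_connections(W, D):
    """Scale the diagonal by (1 - D); D=0 keeps it, D=1 removes it."""
    Wd = W.copy()
    np.fill_diagonal(Wd, (1.0 - D) * np.diag(W))
    return Wd

def recall(W, s, steps=10):
    """Synchronous sign dynamics."""
    for _ in range(steps):
        s = np.sign(W @ s)
    return s

Wd = reduce_self_connections(W, D=0.3)  # D in (0, 0.5), the range the paper discusses

# Stored patterns remain fixed points: the diagonal of a projection matrix is
# at most 1, so scaling it by (1 - D) cannot flip any component's sign.
for k in range(n_patterns):
    assert np.array_equal(recall(Wd, V[:, k]), V[:, k])
```

The interesting trade-off the paper analyzes is not visible in this fixed-point check: shrinking the diagonal enlarges the basins of attraction around the stored patterns while also creating spurious dynamic attractors, which is why the optimum lies strictly inside (0, 0.5) rather than at an endpoint.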
Quadrant-distance graphs: a method for visualizing neural network weight spaces
B. Linnell
One of the major drawbacks to neural networks is the inability of the user to understand what is happening inside the network. Quadrant-distance (QD) graphs allow the user to graphically display a network's weight vector at any point in training, for networks of any size. This allows the user to quickly and easily identify similarities or differences between solution sets. QD graphs may also be used for a variety of other analysis functions, such as comparing initial weights to final weights, and observing the path of the network as it finds a solution.
DOI: 10.1109/IJCNN.1999.832624
Citations: 2
A comparison of radial basis function networks and fuzzy neural logic networks for autonomous star recognition
J. Dickerson, J. Hong, Z. Cox, D. Bailey
Autonomous star recognition requires that many similar patterns be distinguished from one another with a small training set. Since these systems are implemented on board a spacecraft, the network needs to have low memory requirements and minimal computational complexity. Fast training speeds are also important, since star sensor capabilities change over time. This paper compares two networks that meet these needs: radial basis function networks and neural logic networks. Neural logic networks performed much better than radial basis function networks in terms of recognition accuracy, memory needed, and training speed.
DOI: 10.1109/IJCNN.1999.836167
Citations: 1
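For reference, the radial basis function side of the comparison can be sketched as a Gaussian feature layer followed by a least-squares output layer. The toy two-blob data below stands in for star-pattern features and is not the paper's dataset; centers, width, and sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 2-class problem: two well-separated Gaussian blobs.
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(2.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def rbf_features(X, centers, width):
    """Gaussian radial basis activations for each sample/center pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# Use a few training points as centers -- a common simple choice.
centers = X[rng.choice(len(X), size=8, replace=False)]
Phi = rbf_features(X, centers, width=1.0)

# Linear output layer fitted by least squares on one-hot targets.
T = np.eye(2)[y]
Wout, *_ = np.linalg.lstsq(Phi, T, rcond=None)

pred = (Phi @ Wout).argmax(1)
accuracy = (pred == y).mean()
assert accuracy > 0.9  # the blobs are well separated
```

The memory cost the abstract worries about is visible here: the network must store one center vector per hidden unit plus the output weights, which grows quickly when many similar star patterns each need their own centers.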
Sequential learning for associative memory using Kohonen feature map
Takeo Yamada, M. Hattori, Masayuki Morisawa, Hiroshi Ito
We propose a sequential learning algorithm for an associative memory based on the Kohonen feature map. In order to store new information without retraining the weights on previously learned information, weight-fixed neurons and weight-semi-fixed neurons are used in the proposed algorithm. Owing to the semi-fixed neurons, the associative memory becomes structurally robust. Moreover, it has the following features: 1) it is robust to noisy inputs; 2) it has a high storage capacity; and 3) it can deal with one-to-many associations.
DOI: 10.1109/IJCNN.1999.832675
Citations: 26
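The underlying Kohonen feature map update — before the paper's fixed and semi-fixed weight modifications — moves the best-matching unit and its grid neighbours toward each input. A minimal sketch (grid size and learning schedules are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
grid = 5                                   # 5x5 map of prototype vectors
W = rng.random((grid * grid, 2))
coords = np.array([(i, j) for i in range(grid) for j in range(grid)], float)

def som_step(W, x, lr, sigma):
    """One Kohonen update: pull the winner and its grid neighbours toward x."""
    bmu = np.argmin(((W - x) ** 2).sum(1))        # best-matching unit
    d2 = ((coords - coords[bmu]) ** 2).sum(1)     # grid distance to the winner
    h = np.exp(-d2 / (2 * sigma ** 2))            # neighbourhood function
    return W + lr * h[:, None] * (x - W)

data = rng.random((500, 2))
for t, x in enumerate(data):
    frac = t / len(data)
    # Shrink the learning rate and neighbourhood over time, as usual for SOMs.
    W = som_step(W, x, lr=0.5 * (1 - frac), sigma=2.0 * (1 - frac) + 0.1)

# After training, every data point should have a nearby prototype.
err = np.mean([np.min(((W - x) ** 2).sum(1)) for x in data])
assert err < 0.05
```

The paper's sequential-learning idea then amounts to freezing the weights of units that already code stored items (fixed neurons) and only mildly adapting their neighbours (semi-fixed neurons), so new inputs recruit free units instead of overwriting old memories.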
Hybrid fuzzy logic and neural network model for fingerprint minutiae extraction
V. Sagar, J. Koh
This paper presents research into the use of fuzzy-neuro technology in automated fingerprint recognition for the extraction of fingerprint features, known as minutiae. The work presented here is an addendum to work carried out earlier by Sagar et al. (1995).
DOI: 10.1109/IJCNN.1999.836178
Citations: 48