
Latest publications from IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)

Chaotic analog associative memory
Hisao Imai, Y. Osana, M. Hagiwara
We propose a chaotic analog associative memory (CAAM). It can deal with associations of analog patterns including common patterns. The proposed model has the following features: (1) it can deal with associations of analog patterns; (2) it can deal with one-to-many associations; (3) it has robustness for noisy input and neuron damage.
DOI: 10.1109/IJCNN.2001.939522 · Published: 2002-03-01
Cited by: 4
Cluster-weighted modeling with multiclusters
L. Feldkamp, D. Prokhorov, T. Feldkamp
Cluster-weighted modeling (CWM) was proposed by Gershenfeld (1999) for density estimation in joint input-output space. In the base CWM algorithm there is a single output cluster for each input cluster. We extend the base CWM to the structure in which multiple output clusters are associated with the same input cluster. We call this CWM with multiclusters and illustrate it with an example.
DOI: 10.1109/IJCNN.2001.938419 · Published: 2001-07-15
Cited by: 6
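The prediction step the abstract alludes to can be sketched concretely. The following is a minimal illustration, not the authors' implementation: each input cluster carries a Gaussian density and a local linear output model (the function name `cwm_predict` and the cluster parameters are invented for the example), and the prediction is the posterior-weighted sum of the local models.

```python
import numpy as np

def gaussian(x, mu, var):
    # Univariate Gaussian density, used as the input-space cluster model.
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def cwm_predict(x, clusters):
    """clusters: dicts with keys mu, var, weight, a, b (local model y = a*x + b)."""
    dens = np.array([c["weight"] * gaussian(x, c["mu"], c["var"]) for c in clusters])
    post = dens / dens.sum()                                   # p(cluster | x)
    local = np.array([c["a"] * x + c["b"] for c in clusters])  # local model outputs
    return float(post @ local)                                 # posterior-weighted sum

# Two illustrative clusters with different local linear models.
clusters = [
    {"mu": -1.0, "var": 0.5, "weight": 0.5, "a": 2.0, "b": 0.0},
    {"mu":  1.0, "var": 0.5, "weight": 0.5, "a": -1.0, "b": 3.0},
]
print(cwm_predict(-1.0, clusters))  # dominated by the first cluster's model
```

With multiclusters, as the abstract describes, several output models would share the same input-space Gaussian instead of the one-to-one pairing shown here.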
A new validation index for determining the number of clusters in a data set
Hao-jun Sun, Shengrui Wang, Q. Jiang
Clustering analysis plays an important role in solving practical problems in such domains as data mining in large databases. In this paper, we are interested in fuzzy c-means (FCM) based algorithms. The main purpose is to design an effective validity function to measure the result of clustering and to detect the best number of clusters for a given data set in practical applications. After a review of the relevant literature, we present the new validity function. Experimental results and comparisons are given to illustrate the performance of the new validity function.
DOI: 10.1109/IJCNN.2001.938445 · Published: 2001-07-15
Cited by: 10
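The paper's own validity function is not given in the abstract, so as a hedged stand-in the sketch below computes the classical Xie-Beni index, a widely used FCM validity measure of the same compactness-versus-separation kind (lower is better). One would compute it for each candidate number of clusters and pick the minimum.

```python
import numpy as np

def xie_beni(X, U, V, m=2.0):
    """Xie-Beni validity index.
    X: (n, d) data; U: (c, n) fuzzy memberships; V: (c, d) cluster centers."""
    n = X.shape[0]
    d2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(axis=2)  # (c, n) squared distances
    compact = (U ** m * d2).sum()                            # fuzzy compactness
    sep = min(((V[i] - V[j]) ** 2).sum()                     # min center separation
              for i in range(len(V)) for j in range(len(V)) if i != j)
    return compact / (n * sep)

# Two well-separated blobs with nearly crisp memberships around the true centers.
X = np.vstack([np.zeros((5, 2)), np.ones((5, 2)) * 4])
V = np.array([[0.0, 0.0], [4.0, 4.0]])
U = np.array([[0.99] * 5 + [0.01] * 5,
              [0.01] * 5 + [0.99] * 5])
print(xie_beni(X, U, V))  # small value: compact, well-separated clustering
```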
Performance analysis of neural CDMA multiuser detector
Toshiyuki TANAKA
We analyze the performance of neural code-division multiple-access (CDMA) multiuser detectors. A formal correspondence between the CDMA multiuser detection problem and recurrent neural networks such as the Hopfield neural network and the Boltzmann machine is established, based on which a replica analysis of the bit-error rate of the neural multiuser detectors is presented. The detection dynamics of the neural multiuser detectors is also analyzed based on statistical neurodynamics.
DOI: 10.1109/IJCNN.2001.938825 · Published: 2001-07-15
Cited by: 2
Classifiability based pruning of decision trees
M. Dong, R. Kothari
Decision tree pruning is useful in improving the generalization performance of decision trees. As opposed to explicit pruning, in which nodes are removed from fully constructed decision trees, implicit pruning uses a stopping criterion to label a node as a leaf node when splitting it further would not result in acceptable improvement in performance. The stopping criterion is often also called the pre-pruning criterion and is typically based on the pattern instances available at the node (i.e. local information). We propose a new criterion for pre-pruning based on a classifiability measure. The proposed criterion considers not only the number of pattern instances of different classes at a node (node purity) but also the spatial distribution of these instances, to estimate the effect of further splitting the node. The algorithm and some experimental results are presented.
DOI: 10.1109/IJCNN.2001.938424 · Published: 2001-07-15
Cited by: 8
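The idea of a pre-pruning stopping criterion can be illustrated with a deliberately simplified sketch. The authors' classifiability measure also accounts for the spatial distribution of instances, which is not reproduced here; this shows only the node-purity part, with invented thresholds, so it is a stand-in rather than the paper's criterion.

```python
from collections import Counter

def should_stop(labels, purity_threshold=0.95, min_samples=5):
    """Return True if the node holding `labels` should become a leaf.
    Purity-only stand-in for a pre-pruning criterion (thresholds are illustrative)."""
    if len(labels) < min_samples:
        return True                                  # too few instances to split reliably
    majority = Counter(labels).most_common(1)[0][1]  # size of the largest class
    return majority / len(labels) >= purity_threshold

print(should_stop(["a"] * 19 + ["b"]))       # pure enough -> leaf
print(should_stop(["a"] * 10 + ["b"] * 10))  # mixed -> keep splitting
```

A tree builder would call this at every candidate node before attempting a split, instead of pruning nodes after the full tree is grown.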
Improved Hopfield networks by training with noisy data
F. Clift, T. Martinez
An approach to training a generalized Hopfield network is developed and evaluated. Both the weight-symmetry constraint and the zero self-connection constraint are removed from standard Hopfield networks. Training is accomplished with backpropagation through time, using noisy versions of the memorized patterns; training in this way is referred to as noisy associative training (NAT). Performance of NAT is evaluated on both random and correlated data. NAT has been tested on several data sets, with a large number of training runs for each experiment. The data sets used include uniformly distributed random data and several data sets adapted from the U.C. Irvine Machine Learning Repository. Results show that for random patterns, Hopfield networks trained with NAT have an average overall recall accuracy 6.1 times greater than networks produced with either Hebbian or pseudo-inverse training. Additionally, these networks have 13% fewer spurious memories on average than networks trained with pseudo-inverse or Hebbian training. Typically, networks memorizing over 2N patterns (where N is the number of nodes in the network) are produced. Performance on correlated data shows an even greater improvement over networks produced with either Hebbian or pseudo-inverse training: an average of 27.8 times greater recall accuracy, with 14% fewer spurious memories.
DOI: 10.1109/IJCNN.2001.939521 · Published: 2001-07-15
Cited by: 6
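The Hebbian training the paper benchmarks against can be sketched in a few lines. This is the classical baseline (outer-product storage, symmetric weights, zero self-connections, exactly the constraints NAT removes), not the NAT procedure itself.

```python
import numpy as np

def hebbian_weights(patterns):
    """Outer-product (Hebbian) storage of +/-1 patterns."""
    P = np.array(patterns, dtype=float)  # (p, n)
    W = P.T @ P / P.shape[1]             # symmetric weight matrix
    np.fill_diagonal(W, 0.0)             # zero self-connections
    return W

def recall(W, x, steps=10):
    """Iterated sign updates from a (possibly noisy) probe pattern."""
    x = np.array(x, dtype=float)
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1.0                  # break ties consistently
    return x

patterns = [[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]]
W = hebbian_weights(patterns)
noisy = [1, -1, 1, -1, 1, 1]             # first pattern with one bit flipped
print(recall(W, noisy))                  # recovers the first stored pattern
```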
Specifications and FPGA implementation of a systolic Hopfield-type associative memory
I.Z. Mihu, R. Brad, M. Breazu
Neural networks are non-linear static or dynamical systems that learn to solve problems from examples. Most of the learning algorithms require a lot of computing power and, therefore, could benefit from fast dedicated hardware. One of the most common architectures used for this special-purpose hardware is the systolic array. The design and implementation of different neural network architectures in systolic arrays can be complex, however. The paper shows the manner in which the Hopfield neural network can be mapped into a 2-D systolic array and presents an FPGA implementation of the proposed 2-D systolic array.
DOI: 10.1109/IJCNN.2001.939022 · Published: 2001-07-15
Cited by: 9
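The computation the systolic array pipelines is the Hopfield update x ← sign(Wx), a matrix-vector product in which each array cell performs one multiply-accumulate. The loop below mimics that cell-level view in software; the paper's actual 2-D mapping and FPGA design are not reproduced here.

```python
import numpy as np

def systolic_matvec(W, x):
    """Cell-by-cell accumulation of W @ x, mirroring one MAC per array cell."""
    n = len(x)
    acc = np.zeros(n)
    for i in range(n):          # one row of cells per output neuron
        for j in range(n):      # data word x_j streams past cell (i, j)
            acc[i] += W[i, j] * x[j]
    return acc

W = np.array([[0.0, 1.0], [1.0, 0.0]])
x = np.array([1.0, -1.0])
print(np.sign(systolic_matvec(W, x)))   # one Hopfield update step
```

In hardware, the benefit is that all n² multiply-accumulates proceed in a pipelined fashion rather than in this sequential double loop.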
Agent-environment approach to the simulation of Turing machines by neural networks
W.R. de Oliveira, M.C.P. de Souto, Teresa B Ludermir
We propose a way to simulate Turing machines (TMs) by neural networks (NNs) which is in agreement with the correct interpretation of Turing's analysis of computation; compatible with the current approaches to analyze cognition as an interactive agent-environment process; and physically realizable since it does not use connection weights with unbounded precision. We give a full description of an implementation of a universal TM into a recurrent sigmoid NN focusing on the TM finite state control, leaving the tape, an infinite resource, as an external non-intrinsic feature.
DOI: 10.1109/IJCNN.2001.938994 · Published: 2001-07-15
Cited by: 3
Effects of initialization on structure formation and generalization of neural networks
H. Shiratsuchi, H. Gotanda, K. Inoue, K. Kumamaru
In this paper, we propose an initialization method for multilayer neural networks (NNs) employing structure learning with forgetting. The proposed initialization consists of two steps: the weights of hidden units are initialized so that their hyperplanes pass through the center of the input pattern set, and the weights of output units are initialized to zero. Several simulations were performed to study how the initialization affects the structure-forming process of the NN. The simulation results confirmed that the initialization gives a better network structure and higher generalization ability.
DOI: 10.1109/IJCNN.2001.938787 · Published: 2001-07-15
Cited by: 0
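The two-step initialization described in the abstract translates directly into code: pick random hidden weights, set each hidden bias so its hyperplane contains the centroid of the input patterns, and zero the output weights. A minimal sketch follows (the structure learning with forgetting that comes afterwards is not shown, and `init_layers` is an invented name):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layers(X, n_hidden, n_out):
    """Initialize one hidden and one output layer per the abstract's two steps."""
    centroid = X.mean(axis=0)
    W_h = rng.normal(size=(n_hidden, X.shape[1]))  # random hidden weights
    b_h = -W_h @ centroid          # hyperplane w.x + b = 0 contains the centroid
    W_o = np.zeros((n_out, n_hidden))              # output weights start at zero
    b_o = np.zeros(n_out)
    return W_h, b_h, W_o, b_o

X = rng.normal(loc=3.0, size=(100, 4))             # off-center input pattern set
W_h, b_h, W_o, b_o = init_layers(X, n_hidden=8, n_out=2)
# Each hidden pre-activation is exactly zero at the centroid:
print(W_h @ X.mean(axis=0) + b_h)
```

Centering the hyperplanes this way keeps every hidden unit active in the region the data actually occupies, which is plausibly why the abstract reports better structure formation.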
Design of new biologically active molecules by recursive neural networks
A. Micheli, A. Sperduti, A. Starita, A. Bianucci
In this paper, we address the design of novel molecules belonging to the class of adenine analogues (8-azaadenine derivatives), which are of widespread potential therapeutic interest, from the new perspective offered by recursive neural networks for quantitative structure-activity relationship analysis. The generality and flexibility of the method used to process structured domains allow us to propose new solutions to the representation problem for this set of compounds and to obtain good prediction results, as proved by comparison with the values obtained a posteriori after synthesis and biological assays of the designed molecules.
DOI: 10.1109/IJCNN.2001.938805 · Published: 2001-07-15
Cited by: 7