
2015 International Joint Conference on Neural Networks (IJCNN): Latest Publications

Probabilistic Relational Models with clustering uncertainty
Pub Date : 2015-07-12 DOI: 10.1109/IJCNN.2015.7280355
A. Coutant, Philippe Leray, H. L. Capitaine
Many machine learning algorithms aim at finding patterns in propositional data, where all individuals are assumed to be i.i.d. However, the widespread use of relational databases makes multi-relational datasets common, and the i.i.d. assumption is often unreasonable for such data, requiring dedicated algorithms. Accurate and efficient learning in such datasets is an important challenge with multiple applications, including collective classification and link prediction. Probabilistic Relational Models (PRM) are directed lifted graphical models which generalize Bayesian networks to the relational setting. In this paper, we propose a new PRM extension, named PRM with clustering uncertainty, which overcomes several limitations of the PRM with reference uncertainty (PRM-RU) extension, such as the ability to reason about an individual's cluster membership and to use co-clustering to improve association variable dependencies. We also propose a structure learning algorithm for these models and show that these improvements yield: i) better prediction results than PRM-RU; ii) shorter running times.
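The abstract states the mechanism only at a high level; as a loose illustration of how co-clustering can supply the cluster memberships that drive association variables, here is a minimal Python sketch (the toy relation matrix, the cluster count, and the choice of sklearn's SpectralCoclustering are our assumptions, not the paper's algorithm):

```python
# Hypothetical sketch: co-cluster a binary relationship matrix so that each
# individual's cluster membership can serve as a latent association variable,
# in the spirit of PRM with clustering uncertainty.
import numpy as np
from sklearn.cluster import SpectralCoclustering

rng = np.random.default_rng(0)
# Toy user x item "rates" relation (1 = link exists).
relation = (rng.random((40, 30)) < 0.2).astype(float)
relation[:20, :15] += 1.0            # plant a dense co-cluster
relation = np.clip(relation, 0, 1)

model = SpectralCoclustering(n_clusters=3, random_state=0)
model.fit(relation + 1e-6)           # strictly positive entries avoid degenerate rows

row_cluster = model.row_labels_      # cluster membership of each user
col_cluster = model.column_labels_   # cluster membership of each item
# In a PRM-RU-style model, the parent of the link indicator would then be the
# (row_cluster, col_cluster) pair rather than the raw foreign key.
print(row_cluster[:10], col_cluster[:10])
```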
Citations: 2
Faster reinforcement learning after pretraining deep networks to predict state dynamics
Pub Date : 2015-07-12 DOI: 10.1109/IJCNN.2015.7280824
C. Anderson, Minwoo Lee, D. Elliott
Deep learning algorithms have recently appeared that pretrain the hidden layers of neural networks in unsupervised ways, leading to state-of-the-art performance on large classification problems. These methods can also pretrain networks used for reinforcement learning. However, this ignores the additional information available in the reinforcement learning paradigm through the ongoing sequence of (state, action, new state) tuples. This paper demonstrates that learning a predictive model of state dynamics can produce a pretrained hidden layer structure that reduces the time needed to solve reinforcement learning problems.
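A minimal sketch of the pretraining idea, assuming a simple PyTorch setup with placeholder transitions (the network sizes, optimizer, and fake data are illustrative, not the paper's configuration):

```python
# Illustrative sketch (not the paper's exact setup): pretrain a hidden layer
# to predict next-state dynamics, then reuse it to initialize a Q-network.
import torch
import torch.nn as nn

state_dim, action_dim, hidden = 4, 2, 64

hidden_layer = nn.Sequential(nn.Linear(state_dim + action_dim, hidden), nn.Tanh())
dynamics_head = nn.Linear(hidden, state_dim)   # predicts s' from (s, a)
q_head = nn.Linear(hidden, 1)                  # predicts Q(s, a)

# --- pretraining phase on (s, a, s') tuples ---
opt = torch.optim.Adam(list(hidden_layer.parameters()) +
                       list(dynamics_head.parameters()), lr=1e-3)
s = torch.randn(256, state_dim)                # placeholder transitions
a = torch.randn(256, action_dim)
s2 = s + 0.1 * torch.randn_like(s)             # fake next states
for _ in range(200):
    pred = dynamics_head(hidden_layer(torch.cat([s, a], dim=1)))
    loss = nn.functional.mse_loss(pred, s2)
    opt.zero_grad()
    loss.backward()
    opt.step()

# --- RL phase: the pretrained hidden_layer now feeds the Q head ---
q_value = q_head(hidden_layer(torch.cat([s[:1], a[:1]], dim=1)))
print(float(q_value))
```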
Citations: 42
Generalized constraint neural network regression model subject to equality function constraints
Pub Date : 2015-07-12 DOI: 10.1109/IJCNN.2015.7280507
Linlin Cao, Bao-Gang Hu
This paper describes progress on a previous study of generalized constraint neural networks (GCNN). The GCNN model aims to utilize any type of prior in an explicit form so that the model can achieve improved performance and better transparency. A specific type of prior, namely equality function constraints, is investigated in this work. Whereas existing approaches impose the constraints on the given function only at discretized points, our approach, called GCNN-EF, is able to satisfy the constraint exactly and everywhere along the equation. We realize GCNN-EF by a weighted combination of the output of a conventional radial basis function neural network (RBFNN) and the output expressed by the constraints. Numerical studies are conducted on three synthetic data sets in comparison with other existing approaches. Simulation results demonstrate the benefit and efficiency of GCNN-EF.
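The following toy sketch shows one way the weighted-combination idea could look; the blending weight w(x), the constraint function h(x), and the RBF parameters are all hypothetical, and the paper's actual construction differs in detail:

```python
# A minimal sketch of the weighted-combination idea behind GCNN-EF, under our
# own simplifying assumptions: the prior says f(x) must equal h(x) = sin(x)
# for x <= 0, and an RBF network handles the rest of the domain.
import numpy as np

def rbf_predict(x, centers, width, coefs):
    """Plain RBF network output: sum_j coefs[j] * exp(-((x - c_j) / width)^2)."""
    phi = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)
    return phi @ coefs

def h(x):                       # equality function constraint: f = h on x <= 0
    return np.sin(x)

def w(x, sharpness=10.0):       # smooth weight -> 1 where the constraint binds
    return 1.0 / (1.0 + np.exp(sharpness * x))

rng = np.random.default_rng(0)
centers = np.linspace(-2, 2, 8)
coefs = rng.normal(size=8)      # pretend these were fitted to data
x = np.linspace(-2, 2, 9)
f = (1 - w(x)) * rbf_predict(x, centers, 0.5, coefs) + w(x) * h(x)
print(np.round(f, 3))
```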
Citations: 2
The unbalancing effect of hubs on K-medoids clustering in high-dimensional spaces
Pub Date : 2015-07-12 DOI: 10.1109/IJCNN.2015.7280303
Dominik Schnitzer, A. Flexer
Unbalanced cluster solutions exhibit very different cluster sizes, with some clusters being very large while others contain almost no data. We demonstrate that this phenomenon is connected to `hubness', a recently discovered general problem of machine learning in high-dimensional data spaces. Hub objects have a small distance to an exceptionally large number of data points, while anti-hubs are far from all other data points. In an empirical study of K-medoids clustering, we show that hubness gives rise to very unbalanced cluster sizes, resulting in impaired internal and external evaluation indices. We compare three methods that reduce hubness in the distance spaces and show that balancing the clusters improves the evaluation indices. This is done using artificial and real data sets from diverse domains.
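Hubness is commonly quantified by the k-occurrence N_k(x), the number of times a point appears among the k nearest neighbors of other points; a short sketch (dataset sizes and k are arbitrary) shows how the skewness of N_k grows with dimensionality:

```python
# Hedged illustration: compute k-occurrence counts N_k(x) and their skewness.
# The heavily skewed N_k distribution in high dimensions is the effect that
# unbalances K-medoids cluster sizes.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from scipy.stats import skew

rng = np.random.default_rng(0)
k = 10
for dim in (3, 100):
    X = rng.normal(size=(1000, dim))
    # k + 1 neighbors because each point is its own nearest neighbor
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    n_k = np.bincount(idx[:, 1:].ravel(), minlength=len(X))
    print(f"dim={dim:4d}  skewness of N_k = {skew(n_k):.2f}")
```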
Citations: 13
Growing Hierarchical Trees for Data Stream clustering and visualization
Pub Date : 2015-07-12 DOI: 10.1109/IJCNN.2015.7280397
Nhat-Quang Doan, M. Ghesmoune, Hanene Azzag, M. Lebbah
Data stream clustering studies large volumes of continuously arriving data; the objective is to build a good clustering of the stream using a small amount of memory and time. Visualization remains a major challenge for large data streams. In this paper we present a new approach using a hierarchical and topological structure (or network) for both clustering and visualization. The topological network is represented by a graph in which each neuron represents a set of similar data points, and neighboring neurons are connected by edges. The hierarchical component consists of multiple tree-like hierarchies of clusters, which describe the evolution of the data stream and allow their similarity to be analyzed explicitly. This adaptive structure can be exploited by descending top-down from the topological level to any hierarchical level. The performance of the proposed algorithm is evaluated on both synthetic and real-world datasets.
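As a rough illustration of such a structure, here is a bare-bones Python sketch in which each neuron holds topological neighbors and hierarchical children, and search descends top-down to the closest leaf (all names and the descent rule are our own simplifications, not the paper's algorithm):

```python
# Bare-bones sketch of a combined topological/hierarchical structure: neurons
# live in a graph (edges to neighbors) and can also root a tree of children
# at the next hierarchical level.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Neuron:
    weight: np.ndarray                              # prototype vector
    neighbors: list = field(default_factory=list)   # topological edges
    children: list = field(default_factory=list)    # hierarchical tree links

    def best_matching(self, x):
        """Descend top-down: return the closest leaf under this neuron."""
        node = self
        while node.children:
            node = min(node.children,
                       key=lambda c: np.linalg.norm(c.weight - x))
        return node

root = Neuron(np.zeros(2))
root.children = [Neuron(np.array([0.0, 1.0])), Neuron(np.array([1.0, 0.0]))]
root.children[0].neighbors.append(root.children[1])  # topological edge
print(root.best_matching(np.array([0.9, 0.1])).weight)
```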
Citations: 3
Automatic model redundancy reduction for fast back-propagation for deep neural networks in speech recognition
Pub Date : 2015-07-12 DOI: 10.1109/IJCNN.2015.7280335
Y. Qian, Tianxing He, Wei Deng, Kai Yu
Although deep neural networks (DNNs) have achieved great performance gains, the immense computational cost of DNN model training has become a major obstacle to utilizing massive speech data for DNN training. Previous research on DNN training acceleration mostly focused on hardware-based parallelization. In this paper, node pruning and arc restructuring are proposed to exploit model redundancy after a novel, lightly discriminative pretraining process. Using measures of node/arc importance, model redundancies are automatically removed to form a much more compact DNN. This significantly accelerates the subsequent back-propagation (BP) training process. Model redundancy reduction can be combined with multi-GPU parallelization to achieve further acceleration. Experiments showed that the combined acceleration framework can achieve about 85% model size reduction and an over 4.2x speed-up for BP training on 2 GPUs, with no loss of recognition accuracy.
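To make the mechanics concrete, here is a hedged sketch of magnitude-based node pruning on plain weight matrices; the importance measure and the pruning ratio are stand-ins for the paper's actual criteria:

```python
# Rough sketch of node pruning on a fully-connected layer. The paper's actual
# node/arc importance measures differ; this only shows the mechanics of
# removing low-importance hidden units and the arcs attached to them.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(512, 1024))   # input -> hidden weights
W2 = rng.normal(size=(1024, 40))    # hidden -> output weights

# Hypothetical importance: total outgoing weight magnitude of each hidden node.
importance = np.abs(W2).sum(axis=1)
keep = importance >= np.quantile(importance, 0.85)   # prune ~85% of nodes

W1_pruned, W2_pruned = W1[:, keep], W2[keep, :]
shrink = 1 - (W1_pruned.size + W2_pruned.size) / (W1.size + W2.size)
print(f"kept {keep.sum()} of 1024 hidden nodes, {shrink:.0%} fewer parameters")
```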
Citations: 6
Model of associative memory based on antibody chain with one-dimensional chaotic dynamical system
Pub Date : 2015-07-12 DOI: 10.1109/IJCNN.2015.7280647
C. Ou
Immune memory of antigens is formed as the limit behavior of cyclic idiotypic immune networks equipped with antibody dynamics. The immune memory mechanism is studied by combining network structure with dynamical systems. Moreover, associative memory can be explored through network dynamics determined by the affinity index of the antibody chain. Antibody chains with larger affinity indexes generate associative immune memory.
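A speculative toy rendering of these ingredients, assuming the logistic map as the one-dimensional chaotic system and a simple nearest-neighbor coupling along a cyclic chain (both are our assumptions, not the paper's model):

```python
# Toy coupled-map version of a cyclic antibody chain: each concentration is
# updated by a 1-D chaotic map (logistic map) and weakly coupled to its
# predecessor in the cycle. Coupling constant and map choice are assumptions.
import numpy as np

def logistic(x, r=3.9):          # classic 1-D chaotic map for r near 4
    return r * x * (1 - x)

n, eps = 8, 0.05                 # chain length, neighbor coupling strength
x = np.random.default_rng(0).random(n)
for _ in range(1000):            # observe the limit behavior of the chain
    x = (1 - eps) * logistic(x) + eps * logistic(np.roll(x, 1))
print(np.round(x, 3))
```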
Citations: 1
A parameterless mixture model for large margin classification
Pub Date : 2015-07-12 DOI: 10.1109/IJCNN.2015.7280782
L. Torres, C. Castro, A. Braga
This paper presents a geometrical approach for obtaining large margin classifiers. The method explores the geometrical properties of the dataset through the structure of a Gabriel graph, which represents pattern relations according to a given distance metric, such as the Euclidean distance. Once the graph is generated, geometric vectors, analogous to SVM support vectors, are obtained in order to yield the final large margin solution via a mixture model approach. A preliminary experimental study on five real-world benchmarks showed that the method is promising.
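For reference, the Gabriel graph connects points a and b exactly when no third point c lies inside the ball with diameter ab, i.e. when d(a,c)^2 + d(b,c)^2 >= d(a,b)^2 for all c. A brute-force sketch of the construction (how the paper then extracts its geometric vectors from opposite-class edges is not shown here):

```python
# Minimal Gabriel graph construction: connect a and b iff no point c falls
# inside the sphere whose diameter is the segment ab.
import numpy as np

def gabriel_edges(X):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    n, edges = len(X), []
    for a in range(n):
        for b in range(a + 1, n):
            inside = d2[a] + d2[b] < d2[a, b]   # Gabriel test for every c
            inside[[a, b]] = False              # endpoints don't count
            if not inside.any():
                edges.append((a, b))
    return edges

X = np.random.default_rng(0).random((12, 2))
print(gabriel_edges(X))
```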
Citations: 3
Forecasting solar power generated by grid connected PV systems using ensembles of neural networks
Pub Date : 2015-07-12 DOI: 10.1109/IJCNN.2015.7280574
Mashud Rana, I. Koprinska, V. Agelidis
Forecasting the solar power generated by photovoltaic systems at different time intervals is necessary for ensuring the reliable and economic operation of the electricity grid. In this paper, we study the application of neural networks for predicting the next day's photovoltaic power output at 30-minute intervals from previous values, without using any exogenous data. We propose three different approaches based on ensembles of neural networks: two non-iterative and one iterative. We evaluate the performance of these approaches using four Australian solar datasets spanning one year. This includes assessing predictive accuracy, evaluating the benefit of using an ensemble, and comparing performance with two persistence models used as baselines and a prediction model based on support vector regression. The results show that, among the three proposed approaches, the iterative approach was the most accurate, and it also outperformed all other methods used for comparison.
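A hedged sketch of the iterative strategy on a synthetic series (the lag length, ensemble size, and the use of sklearn's MLPRegressor are our choices for illustration, not the paper's configuration):

```python
# Iterative multi-step forecasting with an ensemble of small MLPs: train for
# one-step-ahead prediction, then roll forward, feeding each forecast back in
# as an input until the full next-day horizon is covered.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * rng.normal(size=2000)

lags, horizon = 48, 48           # 48 half-hour steps = one day
X = np.stack([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]

ensemble = [MLPRegressor(hidden_layer_sizes=(20,), max_iter=500,
                         random_state=seed).fit(X, y) for seed in range(5)]

window = list(series[-lags:])
forecast = []
for _ in range(horizon):         # iterate: each prediction becomes an input
    x = np.asarray(window[-lags:]).reshape(1, -1)
    step = np.mean([m.predict(x)[0] for m in ensemble])
    forecast.append(step)
    window.append(step)
print(np.round(forecast[:6], 3))
```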
Citations: 43
Quasi-Newton learning methods for complex-valued neural networks
Pub Date : 2015-07-12 DOI: 10.1109/IJCNN.2015.7280450
Călin-Adrian Popa
This paper presents a full derivation of quasi-Newton learning methods for complex-valued feedforward neural networks. Since these algorithms yield better training results in the real-valued case, extending them to the complex-valued case is a natural option for enhancing the performance of the complex backpropagation algorithm. The training methods are exemplified on several well-known synthetic and real-world applications. Experimental results show a significant improvement over the complex gradient descent algorithm.
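As a simplified stand-in for the idea, one can pack the real and imaginary parts of complex parameters into a single real vector and hand the loss to a quasi-Newton optimizer such as BFGS; the paper derives complex-domain updates directly, so this real-composite trick is only a shortcut sketch:

```python
# Optimize a complex linear model with a quasi-Newton method by stacking
# real and imaginary parameter parts into one real vector for scipy's BFGS.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
z_in = rng.normal(size=(100, 3)) + 1j * rng.normal(size=(100, 3))
w_true = np.array([1 - 2j, 0.5j, -1 + 1j])
t = z_in @ w_true                          # targets of a complex linear model

def loss(packed):
    w = packed[:3] + 1j * packed[3:]       # unpack real/imag halves
    r = z_in @ w - t
    return float(np.mean(np.abs(r) ** 2))  # real-valued objective

res = minimize(loss, np.zeros(6), method="BFGS")
print(np.round(res.x[:3] + 1j * res.x[3:], 3))   # should recover w_true
```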
Citations: 11