{"title":"Modelling Stock Returns With Neural Networks","authors":"A. Refenes, A. Zapranis, Y. Bentz","doi":"10.1109/NNAT.1993.586052","DOIUrl":null,"url":null,"abstract":"Neural networks have attracted much interest in financial engineering but many multivariate data series remain diflcult to model. In this paper we use a non trivial problem in expsure analysis of share prices to multiple factors to explore the interrelationships among the numerous network and data engineering parameters and we highlight the importance of a careful choice of the indicators used as network inputs. We show how data pre-processing can improve generalisation performance by up to 30.5% and present a \"time-sensitive\" cost function, designed to take into account gradually changing input-output relationships. We give empirical evidence that when it is combined with the right leaMags in the indicators generalisation can be further improved by up to IO. 1 %.","PeriodicalId":164805,"journal":{"name":"Workshop on Neural Network Applications and Tools","volume":"89 6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1993-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Workshop on Neural Network Applications and Tools","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NNAT.1993.586052","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 13
Abstract
Neural networks have attracted much interest in financial engineering, but many multivariate data series remain difficult to model. In this paper we use a non-trivial problem in exposure analysis of share prices to multiple factors to explore the interrelationships among the numerous network and data engineering parameters, and we highlight the importance of a careful choice of the indicators used as network inputs. We show how data pre-processing can improve generalisation performance by up to 30.5% and present a "time-sensitive" cost function, designed to take into account gradually changing input-output relationships. We give empirical evidence that when it is combined with the right lead/lags in the indicators, generalisation can be further improved by up to 10.1%.
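The abstract does not give the exact form of the "time-sensitive" cost function. The sketch below shows one common way such a criterion can be built: a mean-squared-error loss with exponentially decaying sample weights, so that errors on more recent observations contribute more than errors on older ones. The function name `time_weighted_mse` and the `decay` parameter are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def time_weighted_mse(y_true, y_pred, decay=0.99):
    """Illustrative time-sensitive cost: recent errors weigh more.

    Samples are assumed ordered oldest -> newest; sample t gets weight
    decay**(T-1-t), so the newest sample has weight 1.  (Hypothetical
    form -- the paper's exact cost function may differ.)
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    T = len(y_true)
    weights = decay ** np.arange(T - 1, -1, -1)   # oldest sample gets smallest weight
    weights /= weights.sum()                      # normalise weights to sum to 1
    return float(np.sum(weights * (y_true - y_pred) ** 2))

# Example: an error of the same size costs more when it occurs late in the sample.
if __name__ == "__main__":
    y = np.zeros(100)
    early_err = y.copy(); early_err[0] = 1.0      # error on the oldest observation
    late_err = y.copy(); late_err[-1] = 1.0       # error on the newest observation
    print(time_weighted_mse(y, early_err))        # small contribution to the cost
    print(time_weighted_mse(y, late_err))         # larger contribution to the cost
```

Under a weighting of this kind, a network fitted by minimising the cost tracks gradually changing input-output relationships more closely, at the price of effectively using less of the older data.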