{"title":"并行多元线性回归的速度与冗余:实证研究","authors":"Mingxian Xu, J. J. Miller, E. Wegman","doi":"10.1109/DMCC.1990.555395","DOIUrl":null,"url":null,"abstract":"The purpose of this paper is to present a parallel implementation of multiple linear regression. We discuss the multiple linear regression model. Traditionally parallelism has been used for either speed-up or redundancy (hence reliability). With stochastic data, by clever parsing and algorithm development, it is possible to achieve both speed and reliability enhancement. We demonstrate this with multiple linear regression. Other examples include kernel estimation and bootstrapping.","PeriodicalId":204431,"journal":{"name":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","volume":"56 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1990-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Parallelizing Multiple Linear Regression for Speed and Redundancy: An Empirical Study\",\"authors\":\"Mingxian Xu, J. J. Miller, E. Wegman\",\"doi\":\"10.1109/DMCC.1990.555395\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The purpose of this paper is to present a parallel implementation of multiple linear regression. We discuss the multiple linear regression model. Traditionally parallelism has been used for either speed-up or redundancy (hence reliability). With stochastic data, by clever parsing and algorithm development, it is possible to achieve both speed and reliability enhancement. We demonstrate this with multiple linear regression. 
Other examples include kernel estimation and bootstrapping.\",\"PeriodicalId\":204431,\"journal\":{\"name\":\"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.\",\"volume\":\"56 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1990-04-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DMCC.1990.555395\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Fifth Distributed Memory Computing Conference, 1990.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DMCC.1990.555395","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Parallelizing Multiple Linear Regression for Speed and Redundancy: An Empirical Study
The purpose of this paper is to present a parallel implementation of multiple linear regression. We discuss the multiple linear regression model. Traditionally, parallelism has been used either for speed-up or for redundancy (and hence reliability). With stochastic data, careful partitioning of the data and algorithm design make it possible to enhance both speed and reliability at once. We demonstrate this with multiple linear regression; other examples include kernel estimation and bootstrapping.
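The abstract's idea of combining speed-up with redundancy can be illustrated with a small sketch (this is an assumption about the general technique, not the authors' implementation): partition the rows of the design matrix across workers, have each worker compute its partial normal-equation sums, and combine them on the host. Because each partial sum is an independent contribution from stochastic data, discarding a failed worker's output still yields a usable estimate. All names below (`partial_sums`, `combine_and_solve`, the chunk counts) are illustrative.

```python
# Hedged sketch of parallel multiple linear regression via row partitioning.
# Each "worker" computes the local contributions X_k^T X_k and X_k^T y_k;
# summing the contributions and solving recovers the full least-squares fit.
import numpy as np

def partial_sums(X_chunk, y_chunk):
    """Per-worker step: local normal-equation contributions for one row block."""
    return X_chunk.T @ X_chunk, X_chunk.T @ y_chunk

def combine_and_solve(parts):
    """Host step: sum the partial contributions and solve for the coefficients."""
    XtX = sum(p[0] for p in parts)
    Xty = sum(p[1] for p in parts)
    return np.linalg.solve(XtX, Xty)

rng = np.random.default_rng(0)
n, workers = 1200, 6
X = np.column_stack([np.ones(n), rng.standard_normal((n, 3))])
beta_true = np.array([2.0, -1.0, 0.5, 3.0])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Speed: each worker would handle one row block (simulated sequentially here).
parts = [partial_sums(Xc, yc)
         for Xc, yc in zip(np.array_split(X, workers), np.array_split(y, workers))]
beta_all = combine_and_solve(parts)

# Redundancy: drop one worker's output; with stochastic data the estimate
# computed from the surviving blocks changes only slightly.
beta_drop = combine_and_solve(parts[:-1])
```

The design point matches the abstract's claim: the same decomposition that buys speed (independent row blocks) also buys reliability, since any subset of blocks still defines a valid, slightly noisier regression.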