{"title":"散度损失函数下矩阵变量正态分布均值矩阵的极大极小估计","authors":"S. Zinodiny, Sadegh Rezaei, S. Nadarajah","doi":"10.6092/ISSN.1973-2201/6956","DOIUrl":null,"url":null,"abstract":"The problem of estimating the mean matrix of a matrix-variate normal distribution with a covariance matrix is considered under two loss functions. We construct a class of empirical Bayes estimators which are better than the maximum likelihood estimator under the first loss function and hence show that the maximum likelihood estimator is inadmissible. We find a general class of minimax estimators. Also we give a class of estimators that improve on the maximum likelihood estimator under the second loss function and hence show that the maximum likelihood estimator is inadmissible.","PeriodicalId":45117,"journal":{"name":"Statistica","volume":"77 1","pages":"369-384"},"PeriodicalIF":1.6000,"publicationDate":"2018-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Minimax Estimation of the Mean Matrix of the Matrix Variate Normal Distribution under the Divergence Loss Function\",\"authors\":\"S. Zinodiny, Sadegh Rezaei, S. Nadarajah\",\"doi\":\"10.6092/ISSN.1973-2201/6956\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The problem of estimating the mean matrix of a matrix-variate normal distribution with a covariance matrix is considered under two loss functions. We construct a class of empirical Bayes estimators which are better than the maximum likelihood estimator under the first loss function and hence show that the maximum likelihood estimator is inadmissible. We find a general class of minimax estimators. Also we give a class of estimators that improve on the maximum likelihood estimator under the second loss function and hence show that the maximum likelihood estimator is inadmissible.\",\"PeriodicalId\":45117,\"journal\":{\"name\":\"Statistica\",\"volume\":\"77 1\",\"pages\":\"369-384\"},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2018-03-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Statistica\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.6092/ISSN.1973-2201/6956\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"STATISTICS & PROBABILITY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Statistica","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.6092/ISSN.1973-2201/6956","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"STATISTICS & PROBABILITY","Score":null,"Total":0}
Minimax Estimation of the Mean Matrix of the Matrix Variate Normal Distribution under the Divergence Loss Function
The problem of estimating the mean matrix of a matrix-variate normal distribution with a covariance matrix is considered under two loss functions. We construct a class of empirical Bayes estimators that dominate the maximum likelihood estimator under the first loss function, thereby showing that the maximum likelihood estimator is inadmissible, and we obtain a general class of minimax estimators. We also give a class of estimators that improve on the maximum likelihood estimator under the second loss function, again showing that the maximum likelihood estimator is inadmissible.
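The abstract does not reproduce the estimators themselves. As a minimal illustrative sketch of the kind of shrinkage that can dominate the maximum likelihood estimator of a normal mean matrix, the Python snippet below implements a classical Efron-Morris-type estimator under quadratic loss; this is an assumption for illustration only, not the empirical Bayes or minimax estimators constructed in the paper, which are derived under divergence loss functions.

```python
import numpy as np

def efron_morris_estimator(X):
    """Efron-Morris-type shrinkage estimate of the mean matrix M
    for X ~ N_{p x m}(M, I_p (x) I_m), requiring p >= m + 2.

    delta(X) = X (I_m - c (X^T X)^{-1}),  with  c = p - m - 1.
    (Illustrative only; not the estimators of Zinodiny et al.)
    """
    p, m = X.shape
    c = p - m - 1
    if c <= 0:
        # Shrinkage constant must be positive; fall back to the MLE.
        return X.copy()
    return X @ (np.eye(m) - c * np.linalg.inv(X.T @ X))

# Monte Carlo comparison of average quadratic loss: MLE (X itself)
# versus the shrinkage estimator, at the true mean M = 0 where
# shrinkage helps most.
rng = np.random.default_rng(0)
p, m = 10, 3
M = np.zeros((p, m))
reps = 2000
loss_mle, loss_shrink = 0.0, 0.0
for _ in range(reps):
    X = M + rng.standard_normal((p, m))
    loss_mle += np.sum((X - M) ** 2)
    loss_shrink += np.sum((efron_morris_estimator(X) - M) ** 2)
print("avg loss, MLE:      ", loss_mle / reps)
print("avg loss, shrinkage:", loss_shrink / reps)
```

Run as a script, the shrinkage estimator shows a markedly smaller average loss than the MLE at M = 0; its advantage shrinks as the true mean moves away from the shrinkage target, which is the qualitative behaviour one expects of the dominating estimators discussed in the paper.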