{"title":"Chisini-Jensen-Shannon散度的扩张","authors":"P. Sharma, Gary Holness","doi":"10.1109/DSAA.2016.25","DOIUrl":null,"url":null,"abstract":"Jensen-Shannon divergence (JSD) does not provide adequate separation when the difference between input distributions is subtle. A recently introduced technique, Chisini Jensen Shannon Divergence (CJSD), increases JSD's ability to discriminate between probability distributions by reformulating with operators from Chisini mean. As a consequence, CJSDs also carry additional properties concerning robustness. The utility of this approach was validated in the form of two SVM kernels that give superior classification performance. Our work explores why the performance improvement to JSDs is afforded by this reformulation. We characterize the nature of this improvement based on the idea of relative dilation, that is how Chisini mean transforms JSD's range and prove a number of propositions that establish the degree of this separation. Finally, we provide empirical validation on a synthetic dataset that confirms our theoretical results pertaining to relative dilation.","PeriodicalId":193885,"journal":{"name":"2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Dilation of Chisini-Jensen-Shannon Divergence\",\"authors\":\"P. Sharma, Gary Holness\",\"doi\":\"10.1109/DSAA.2016.25\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Jensen-Shannon divergence (JSD) does not provide adequate separation when the difference between input distributions is subtle. A recently introduced technique, Chisini Jensen Shannon Divergence (CJSD), increases JSD's ability to discriminate between probability distributions by reformulating with operators from Chisini mean. As a consequence, CJSDs also carry additional properties concerning robustness. The utility of this approach was validated in the form of two SVM kernels that give superior classification performance. Our work explores why the performance improvement to JSDs is afforded by this reformulation. We characterize the nature of this improvement based on the idea of relative dilation, that is how Chisini mean transforms JSD's range and prove a number of propositions that establish the degree of this separation. 
Finally, we provide empirical validation on a synthetic dataset that confirms our theoretical results pertaining to relative dilation.\",\"PeriodicalId\":193885,\"journal\":{\"name\":\"2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA)\",\"volume\":\"31 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DSAA.2016.25\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DSAA.2016.25","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Jensen-Shannon divergence (JSD) does not provide adequate separation when the difference between input distributions is subtle. A recently introduced technique, the Chisini-Jensen-Shannon divergence (CJSD), increases JSD's ability to discriminate between probability distributions by reformulating it with operators derived from the Chisini mean. As a consequence, CJSDs also carry additional robustness properties. The utility of this approach was previously validated in the form of two SVM kernels that give superior classification performance. Our work explores why this reformulation affords the performance improvement over JSD. We characterize the nature of the improvement through the idea of relative dilation, that is, how the Chisini mean transforms JSD's range, and we prove a number of propositions that establish the degree of this separation. Finally, we provide empirical validation on a synthetic dataset that confirms our theoretical results pertaining to relative dilation.
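For intuition, the following is a minimal sketch of what such a reformulation might look like. The abstract does not spell out the construction, so this sketch assumes the CJSD is obtained by substituting a geometric or harmonic Chisini mean for the arithmetic mean inside JSD's mixture term; the function names, the `mean` parameter, and the two-bin example are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in nats, treating 0 * log(0) as 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def jsd(p, q):
    """Classical JSD: H(M) - (H(P) + H(Q)) / 2, with arithmetic-mean mixture M."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    return shannon_entropy(m) - 0.5 * (shannon_entropy(p) + shannon_entropy(q))

def cjsd(p, q, mean="geometric"):
    """Hypothetical CJSD sketch: replace the arithmetic mean in JSD's
    mixture term with another Chisini mean (geometric or harmonic)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    if mean == "geometric":
        m = np.sqrt(p * q)
    elif mean == "harmonic":
        with np.errstate(divide="ignore", invalid="ignore"):
            m = np.where(p + q > 0, 2.0 * p * q / (p + q), 0.0)
    else:
        raise ValueError(f"unsupported Chisini mean: {mean}")
    return shannon_entropy(m) - 0.5 * (shannon_entropy(p) + shannon_entropy(q))

# Two nearly identical two-bin distributions: a subtle difference
# where the classical JSD value is tiny.
p = np.array([0.49, 0.51])
q = np.array([0.51, 0.49])
print("JSD:           ", jsd(p, q))
print("CJSD geometric:", cjsd(p, q, "geometric"))
print("CJSD harmonic: ", cjsd(p, q, "harmonic"))
```

In this toy example the geometric- and harmonic-mean variants come out larger than the classical JSD, which is the kind of range transformation ("relative dilation") the paper characterizes; note that these Chisini mixtures need not sum to one, which is one source of the changed range.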