Gholamali Aminian;Saeed Masiha;Laura Toni;Miguel R. D. Rodrigues
{"title":"通过辅助分布实现学习算法泛化误差边界","authors":"Gholamali Aminian;Saeed Masiha;Laura Toni;Miguel R. D. Rodrigues","doi":"10.1109/JSAIT.2024.3391900","DOIUrl":null,"url":null,"abstract":"Generalization error bounds are essential for comprehending how well machine learning models work. In this work, we suggest a novel method, i.e., the Auxiliary Distribution Method, that leads to new upper bounds on expected generalization errors that are appropriate for supervised learning scenarios. We show that our general upper bounds can be specialized under some conditions to new bounds involving the \n<inline-formula> <tex-math>$\\alpha $ </tex-math></inline-formula>\n-Jensen-Shannon, \n<inline-formula> <tex-math>$\\alpha $ </tex-math></inline-formula>\n-Rényi \n<inline-formula> <tex-math>$(0\\lt \\alpha \\lt 1)$ </tex-math></inline-formula>\n information between a random variable modeling the set of training samples and another random variable modeling the set of hypotheses. Our upper bounds based on \n<inline-formula> <tex-math>$\\alpha $ </tex-math></inline-formula>\n-Jensen-Shannon information are also finite. Additionally, we demonstrate how our auxiliary distribution method can be used to derive the upper bounds on excess risk of some learning algorithms in the supervised learning context and the generalization error under the distribution mismatch scenario in supervised learning algorithms, where the distribution mismatch is modeled as \n<inline-formula> <tex-math>$\\alpha $ </tex-math></inline-formula>\n-Jensen-Shannon or \n<inline-formula> <tex-math>$\\alpha $ </tex-math></inline-formula>\n-Rényi divergence between the distribution of test and training data samples distributions. We also outline the conditions for which our proposed upper bounds might be tighter than other earlier upper bounds.","PeriodicalId":73295,"journal":{"name":"IEEE journal on selected areas in information theory","volume":"5 ","pages":"273-284"},"PeriodicalIF":0.0000,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Learning Algorithm Generalization Error Bounds via Auxiliary Distributions\",\"authors\":\"Gholamali Aminian;Saeed Masiha;Laura Toni;Miguel R. D. Rodrigues\",\"doi\":\"10.1109/JSAIT.2024.3391900\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Generalization error bounds are essential for comprehending how well machine learning models work. In this work, we suggest a novel method, i.e., the Auxiliary Distribution Method, that leads to new upper bounds on expected generalization errors that are appropriate for supervised learning scenarios. We show that our general upper bounds can be specialized under some conditions to new bounds involving the \\n<inline-formula> <tex-math>$\\\\alpha $ </tex-math></inline-formula>\\n-Jensen-Shannon, \\n<inline-formula> <tex-math>$\\\\alpha $ </tex-math></inline-formula>\\n-Rényi \\n<inline-formula> <tex-math>$(0\\\\lt \\\\alpha \\\\lt 1)$ </tex-math></inline-formula>\\n information between a random variable modeling the set of training samples and another random variable modeling the set of hypotheses. Our upper bounds based on \\n<inline-formula> <tex-math>$\\\\alpha $ </tex-math></inline-formula>\\n-Jensen-Shannon information are also finite. 
Additionally, we demonstrate how our auxiliary distribution method can be used to derive the upper bounds on excess risk of some learning algorithms in the supervised learning context and the generalization error under the distribution mismatch scenario in supervised learning algorithms, where the distribution mismatch is modeled as \\n<inline-formula> <tex-math>$\\\\alpha $ </tex-math></inline-formula>\\n-Jensen-Shannon or \\n<inline-formula> <tex-math>$\\\\alpha $ </tex-math></inline-formula>\\n-Rényi divergence between the distribution of test and training data samples distributions. We also outline the conditions for which our proposed upper bounds might be tighter than other earlier upper bounds.\",\"PeriodicalId\":73295,\"journal\":{\"name\":\"IEEE journal on selected areas in information theory\",\"volume\":\"5 \",\"pages\":\"273-284\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-04-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE journal on selected areas in information theory\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10508532/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE journal on selected areas in information theory","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10508532/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Learning Algorithm Generalization Error Bounds via Auxiliary Distributions
Generalization error bounds are essential for understanding how well machine learning models perform. In this work, we propose a novel approach, the Auxiliary Distribution Method, which yields new upper bounds on the expected generalization error in supervised learning. We show that our general upper bounds can be specialized, under suitable conditions, to new bounds involving the $\alpha$-Jensen-Shannon information and the $\alpha$-Rényi information ($0 < \alpha < 1$) between a random variable modeling the set of training samples and another random variable modeling the set of hypotheses. Our upper bounds based on $\alpha$-Jensen-Shannon information are also finite. Additionally, we demonstrate how the auxiliary distribution method can be used to derive upper bounds on the excess risk of some supervised learning algorithms, as well as on the generalization error under distribution mismatch, where the mismatch is modeled as the $\alpha$-Jensen-Shannon or $\alpha$-Rényi divergence between the test and training data distributions. We also outline the conditions under which our proposed upper bounds might be tighter than earlier upper bounds.
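For concreteness, below is a minimal illustrative sketch (not taken from the paper) of the two divergences underlying these information measures, computed for discrete distributions. It assumes the common skew convention for the $\alpha$-Jensen-Shannon divergence, $\mathrm{JS}_\alpha(P\|Q) = \alpha\,\mathrm{KL}(P\|M_\alpha) + (1-\alpha)\,\mathrm{KL}(Q\|M_\alpha)$ with $M_\alpha = \alpha P + (1-\alpha)Q$, and the standard Rényi divergence of order $\alpha$; the paper's exact conventions for the corresponding information quantities (which involve the joint distribution of samples and hypotheses) may differ, and the function names here are hypothetical.

```python
# Illustrative sketch only: common definitions of the alpha-Renyi divergence
# and an alpha-skew Jensen-Shannon divergence for discrete distributions.
# These conventions are assumptions; they may not match the paper's exact setup.
import numpy as np

def kl_divergence(p, q):
    """KL(P||Q) for discrete distributions given as probability vectors."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def renyi_divergence(p, q, alpha):
    """Renyi divergence of order alpha in (0, 1): (1/(alpha-1)) log sum p^alpha q^(1-alpha)."""
    assert 0 < alpha < 1
    return float(np.log(np.sum(p**alpha * q**(1 - alpha))) / (alpha - 1))

def alpha_js_divergence(p, q, alpha):
    """Alpha-skew Jensen-Shannon divergence: alpha*KL(P||M) + (1-alpha)*KL(Q||M),
    with mixture M = alpha*P + (1-alpha)*Q."""
    m = alpha * p + (1 - alpha) * q
    return alpha * kl_divergence(p, m) + (1 - alpha) * kl_divergence(q, m)

# Example: two distributions with partially disjoint support.
p = np.array([0.5, 0.5, 0.0])
q = np.array([0.0, 0.5, 0.5])
print(alpha_js_divergence(p, q, 0.5))  # finite even though KL(P||Q) is infinite
print(renyi_divergence(np.array([0.6, 0.4]), np.array([0.4, 0.6]), 0.5))
```

Under this skew convention, the $\alpha$-Jensen-Shannon divergence is bounded by the binary entropy $h(\alpha) = \alpha\log\frac{1}{\alpha} + (1-\alpha)\log\frac{1}{1-\alpha}$, since $M_\alpha \ge \alpha P$ and $M_\alpha \ge (1-\alpha)Q$. This boundedness is consistent with the abstract's remark that the $\alpha$-Jensen-Shannon-based generalization bounds remain finite, in contrast to bounds based on unbounded measures such as mutual information.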