{"title":"可视化隐式模型选择权衡","authors":"Zezhen He, Yaron Shaposhnik","doi":"10.1613/jair.1.13764","DOIUrl":null,"url":null,"abstract":"The recent rise of machine learning (ML) has been leveraged by practitioners and researchers to provide new solutions to an ever growing number of business problems. As with other ML applications, these solutions rely on model selection, which is typically achieved by evaluating certain metrics on models separately and selecting the model whose evaluations (i.e., accuracy-related loss and/or certain interpretability measures) are optimal. However, empirical evidence suggests that, in practice, multiple models often attain competitive results. Therefore, while models’ overall performance could be similar, they could operate quite differently. This results in an implicit tradeoff in models’ performance throughout the feature space which resolving requires new model selection tools. This paper explores methods for comparing predictive models in an interpretable manner to uncover the tradeoff and help resolve it. To this end, we propose various methods that synthesize ideas from supervised learning, unsupervised learning, dimensionality reduction, and visualization to demonstrate how they can be used to inform model developers about the model selection process. Using various datasets and a simple Python interface, we demonstrate how practitioners and researchers could benefit from applying these approaches to better understand the broader impact of their model selection choices.","PeriodicalId":54877,"journal":{"name":"Journal of Artificial Intelligence Research","volume":"291 1","pages":"0"},"PeriodicalIF":4.5000,"publicationDate":"2023-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Visualizing the Implicit Model Selection Tradeoff\",\"authors\":\"Zezhen He, Yaron Shaposhnik\",\"doi\":\"10.1613/jair.1.13764\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The recent rise of machine learning (ML) has been leveraged by practitioners and researchers to provide new solutions to an ever growing number of business problems. As with other ML applications, these solutions rely on model selection, which is typically achieved by evaluating certain metrics on models separately and selecting the model whose evaluations (i.e., accuracy-related loss and/or certain interpretability measures) are optimal. However, empirical evidence suggests that, in practice, multiple models often attain competitive results. Therefore, while models’ overall performance could be similar, they could operate quite differently. This results in an implicit tradeoff in models’ performance throughout the feature space which resolving requires new model selection tools. This paper explores methods for comparing predictive models in an interpretable manner to uncover the tradeoff and help resolve it. To this end, we propose various methods that synthesize ideas from supervised learning, unsupervised learning, dimensionality reduction, and visualization to demonstrate how they can be used to inform model developers about the model selection process. 
Using various datasets and a simple Python interface, we demonstrate how practitioners and researchers could benefit from applying these approaches to better understand the broader impact of their model selection choices.\",\"PeriodicalId\":54877,\"journal\":{\"name\":\"Journal of Artificial Intelligence Research\",\"volume\":\"291 1\",\"pages\":\"0\"},\"PeriodicalIF\":4.5000,\"publicationDate\":\"2023-03-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Artificial Intelligence Research\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1613/jair.1.13764\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Artificial Intelligence Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1613/jair.1.13764","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
The recent rise of machine learning (ML) has been leveraged by practitioners and researchers to provide new solutions to an ever growing number of business problems. As with other ML applications, these solutions rely on model selection, which is typically achieved by evaluating certain metrics on models separately and selecting the model whose evaluations (i.e., accuracy-related loss and/or certain interpretability measures) are optimal. However, empirical evidence suggests that, in practice, multiple models often attain competitive results. Therefore, while models’ overall performance could be similar, they could operate quite differently. This results in an implicit tradeoff in models’ performance throughout the feature space which resolving requires new model selection tools. This paper explores methods for comparing predictive models in an interpretable manner to uncover the tradeoff and help resolve it. To this end, we propose various methods that synthesize ideas from supervised learning, unsupervised learning, dimensionality reduction, and visualization to demonstrate how they can be used to inform model developers about the model selection process. Using various datasets and a simple Python interface, we demonstrate how practitioners and researchers could benefit from applying these approaches to better understand the broader impact of their model selection choices.
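The sketch below is not the paper's interface; it is a minimal, hypothetical Python example (scikit-learn models on a standard dataset, with an assumed PCA-based 2D view) meant only to illustrate the phenomenon the abstract describes: two models with near-identical aggregate accuracy can still disagree on a sizeable region of the feature space, which is the implicit tradeoff that aggregate metrics hide.

```python
# Illustrative sketch (not the authors' tool): compare where two similarly
# accurate models disagree across the feature space.
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two candidate models with comparable overall accuracy.
m1 = LogisticRegression(max_iter=5000).fit(X_train, y_train)
m2 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"accuracy m1={m1.score(X_test, y_test):.3f}, m2={m2.score(X_test, y_test):.3f}")

# Where do their predictions differ?
disagree = m1.predict(X_test) != m2.predict(X_test)
print(f"disagreement rate on test set: {disagree.mean():.3f}")

# Project test points to 2D and highlight the disagreement region.
Z = PCA(n_components=2).fit(X_train).transform(X_test)
plt.scatter(Z[~disagree, 0], Z[~disagree, 1], c="lightgray", label="models agree")
plt.scatter(Z[disagree, 0], Z[disagree, 1], c="crimson", label="models disagree")
plt.legend()
plt.title("Similar overall accuracy, different behavior across the feature space")
plt.show()
```

In this kind of view, a small but localized disagreement region signals that the choice between the two models quietly favors some parts of the input population over others, which is the tradeoff the paper's visualization methods aim to surface.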
Journal Introduction:
JAIR (ISSN 1076-9757) covers all areas of artificial intelligence (AI), publishing refereed research articles, survey articles, and technical notes. Established in 1993 as one of the first electronic scientific journals, JAIR is indexed by INSPEC, Science Citation Index, and MathSciNet. JAIR reviews papers within approximately three months of submission and publishes accepted articles on the internet immediately upon receiving the final versions. JAIR articles are published for free distribution on the internet by the AI Access Foundation, and for purchase in bound volumes by AAAI Press.