Ángel Alexander Cabrera, Marco Tulio Ribeiro, Bongshin Lee, R. Deline, Adam Perer, S. Drucker
{"title":"我的AI学到了什么?数据科学家如何理解模型行为","authors":"Ángel Alexander Cabrera, Marco Tulio Ribeiro, Bongshin Lee, R. Deline, Adam Perer, S. Drucker","doi":"10.1145/3542921","DOIUrl":null,"url":null,"abstract":"Data scientists require rich mental models of how AI systems behave to effectively train, debug, and work with them. Despite the prevalence of AI analysis tools, there is no general theory describing how people make sense of what their models have learned. We frame this process as a form of sensemaking and derive a framework describing how data scientists develop mental models of AI behavior. To evaluate the framework, we show how existing AI analysis tools fit into this sensemaking process and use it to design AIFinnity, a system for analyzing image-and-text models. Lastly, we explored how data scientists use a tool developed with the framework through a think-aloud study with 10 data scientists tasked with using AIFinnity to pick an image captioning model. We found that AIFinnity’s sensemaking workflow reflected participants’ mental processes and enabled them to discover and validate diverse AI behaviors.","PeriodicalId":50917,"journal":{"name":"ACM Transactions on Computer-Human Interaction","volume":"30 1","pages":"1 - 27"},"PeriodicalIF":4.8000,"publicationDate":"2023-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"19","resultStr":"{\"title\":\"What Did My AI Learn? How Data Scientists Make Sense of Model Behavior\",\"authors\":\"Ángel Alexander Cabrera, Marco Tulio Ribeiro, Bongshin Lee, R. Deline, Adam Perer, S. Drucker\",\"doi\":\"10.1145/3542921\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Data scientists require rich mental models of how AI systems behave to effectively train, debug, and work with them. Despite the prevalence of AI analysis tools, there is no general theory describing how people make sense of what their models have learned. We frame this process as a form of sensemaking and derive a framework describing how data scientists develop mental models of AI behavior. To evaluate the framework, we show how existing AI analysis tools fit into this sensemaking process and use it to design AIFinnity, a system for analyzing image-and-text models. Lastly, we explored how data scientists use a tool developed with the framework through a think-aloud study with 10 data scientists tasked with using AIFinnity to pick an image captioning model. 
We found that AIFinnity’s sensemaking workflow reflected participants’ mental processes and enabled them to discover and validate diverse AI behaviors.\",\"PeriodicalId\":50917,\"journal\":{\"name\":\"ACM Transactions on Computer-Human Interaction\",\"volume\":\"30 1\",\"pages\":\"1 - 27\"},\"PeriodicalIF\":4.8000,\"publicationDate\":\"2023-03-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"19\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Computer-Human Interaction\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1145/3542921\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, CYBERNETICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Computer-Human Interaction","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3542921","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
Published in: ACM Transactions on Computer-Human Interaction, Volume 30, Issue 1, pp. 1–27. DOI: 10.1145/3542921. Publication date: 2023-03-07.
What Did My AI Learn? How Data Scientists Make Sense of Model Behavior
Data scientists require rich mental models of how AI systems behave to effectively train, debug, and work with them. Despite the prevalence of AI analysis tools, there is no general theory describing how people make sense of what their models have learned. We frame this process as a form of sensemaking and derive a framework describing how data scientists develop mental models of AI behavior. To evaluate the framework, we show how existing AI analysis tools fit into this sensemaking process and use it to design AIFinnity, a system for analyzing image-and-text models. Lastly, we explored how data scientists use a tool developed with the framework through a think-aloud study with 10 data scientists tasked with using AIFinnity to pick an image captioning model. We found that AIFinnity’s sensemaking workflow reflected participants’ mental processes and enabled them to discover and validate diverse AI behaviors.
Journal Introduction:
This ACM Transaction seeks to be the premier archival journal in the multidisciplinary field of human-computer interaction. Since its first issue in March 1994, it has presented work of the highest scientific quality that contributes to practice, both present and future. The primary emphasis is on results of broad application, but the journal also considers original work focused on specific domains, on special requirements, and on ethical issues, spanning the full range of design, development, and use of interactive systems.