Systematic analysis of 32,111 AI model cards characterizes documentation practice in AI

Weixin Liang, Nazneen Rajani, Xinyu Yang, Ezinwanne Ozoani, Eric Wu, Yiqun Chen, Daniel Scott Smith, James Zou

Nature Machine Intelligence 6(7), 744-753 (published 21 June 2024)
DOI: 10.1038/s42256-024-00857-z · https://www.nature.com/articles/s42256-024-00857-z
Abstract
The rapid proliferation of AI models has underscored the importance of thorough documentation, which enables users to understand, trust and effectively use these models in various applications. Although developers are encouraged to produce model cards, it is not clear how much or what information these cards contain. In this study we conduct a comprehensive analysis of 32,111 AI model documentations on Hugging Face, a leading platform for distributing and deploying AI models. Our investigation sheds light on prevailing model card documentation practices. Most AI models with a substantial number of downloads provide model cards, although with uneven informativeness. We find that sections addressing environmental impact, limitations and evaluation exhibit the lowest filled-out rates, whereas the training section is the most consistently filled-out. We analyse the content of each section to characterize practitioners’ priorities. Interestingly, there are considerable discussions of data, sometimes with equal or even greater emphasis than the model itself. Our study provides a systematic assessment of community norms and practices surrounding model documentation through large-scale data science and linguistic analysis. As the number of AI models has rapidly grown, there is an increased focus on improving documentation through model cards. Liang et al. explore questions around adoption practices and the type of information provided in model cards through a large-scale analysis of 32,111 model cards from 74,970 models.
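The kind of measurement the abstract describes (which model-card sections are actually filled out) can be approximated with the Hugging Face Hub client library. The sketch below is purely illustrative and is not the authors' pipeline: the section names, the keyword-matching heuristic and the download-sorted sample of 200 models are assumptions made for this example.

```python
# Minimal sketch: estimate how often common model-card sections appear
# among the most-downloaded models on Hugging Face.
from huggingface_hub import HfApi, ModelCard

# Assumed section names, based on the standard model-card template.
SECTIONS = ["Training", "Evaluation", "Limitations", "Environmental Impact"]

api = HfApi()
counts = {name: 0 for name in SECTIONS}
n_cards = 0

# Sample the most-downloaded models (the paper covers 74,970 models in total).
for model in api.list_models(sort="downloads", direction=-1, limit=200):
    try:
        card = ModelCard.load(model.id)  # downloads and parses the repo's README.md
    except Exception:
        continue  # model has no card, or it could not be parsed
    n_cards += 1
    text = card.text.lower()  # card body without the YAML metadata header
    for name in SECTIONS:
        # Crude heuristic: count a section as present if its name appears in the body.
        if name.lower() in text:
            counts[name] += 1

for name in SECTIONS:
    rate = counts[name] / n_cards if n_cards else 0.0
    print(f"{name}: present in {rate:.1%} of {n_cards} cards")
```

A real analysis would parse Markdown headings and check whether each section contains non-template content, rather than matching keywords, but the structure of the measurement is the same.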
About the journal:
Nature Machine Intelligence is a distinguished publication that presents original research and reviews on various topics in machine learning, robotics, and AI. Our focus extends beyond these fields, exploring their profound impact on other scientific disciplines, as well as societal and industrial aspects. We recognize limitless possibilities wherein machine intelligence can augment human capabilities and knowledge in domains like scientific exploration, healthcare, medical diagnostics, and the creation of safe and sustainable cities, transportation, and agriculture. Simultaneously, we acknowledge the emergence of ethical, social, and legal concerns due to the rapid pace of advancements.
To foster interdisciplinary discussions on these far-reaching implications, Nature Machine Intelligence serves as a platform for dialogue facilitated through Comments, News Features, News & Views articles, and Correspondence. Our goal is to encourage a comprehensive examination of these subjects.
Similar to all Nature-branded journals, Nature Machine Intelligence operates under the guidance of a team of skilled editors. We adhere to a fair and rigorous peer-review process, ensuring high standards of copy-editing and production, swift publication, and editorial independence.