{"title":"人工智能会让教师的工作变得更糟吗?","authors":"Autumn A. Arnett","doi":"10.1002/emt.31310","DOIUrl":null,"url":null,"abstract":"<p>Rua Mae Williams, an assistant professor in the User Experience Design program at Purdue University, recently shared on X (the network formerly known as Twitter) the ways implicit bias about who and what intelligence looks and sounds like inspires greater use of artificial intelligence platforms, such as ChatGPT, among students who don’t feel they measure up.</p>","PeriodicalId":100479,"journal":{"name":"Enrollment Management Report","volume":"28 8","pages":"5-7"},"PeriodicalIF":0.0000,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Is AI making faculty's jobs worse?\",\"authors\":\"Autumn A. Arnett\",\"doi\":\"10.1002/emt.31310\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Rua Mae Williams, an assistant professor in the User Experience Design program at Purdue University, recently shared on X (the network formerly known as Twitter) the ways implicit bias about who and what intelligence looks and sounds like inspires greater use of artificial intelligence platforms, such as ChatGPT, among students who don’t feel they measure up.</p>\",\"PeriodicalId\":100479,\"journal\":{\"name\":\"Enrollment Management Report\",\"volume\":\"28 8\",\"pages\":\"5-7\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-10-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Enrollment Management Report\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/emt.31310\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Enrollment Management Report","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/emt.31310","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Rua Mae Williams, an assistant professor in the User Experience Design program at Purdue University, recently shared on X (formerly Twitter) how implicit bias about who is intelligent, and what intelligence looks and sounds like, drives greater use of artificial intelligence platforms such as ChatGPT among students who feel they don't measure up.