{"title":"负责任的数据科学教学:开辟新的教学领域。","authors":"Armanda Lewis, Julia Stoyanovich","doi":"10.1007/s40593-021-00241-7","DOIUrl":null,"url":null,"abstract":"<p><p>Although an increasing number of ethical data science and AI courses is available, with many focusing specifically on technology and computer ethics, pedagogical approaches employed in these courses rely exclusively on texts rather than on algorithmic development or data analysis. In this paper we recount a recent experience in developing and teaching a technical course focused on responsible data science, which tackles the issues of ethics in AI, legal compliance, data quality, algorithmic fairness and diversity, transparency of data and algorithms, privacy, and data protection. Interpretability of machine-assisted decision-making is an important component of responsible data science that gives a good lens through which to see other responsible data science topics, including privacy and fairness. We provide emerging pedagogical best practices for teaching technical data science and AI courses that focus on interpretability, and tie responsible data science to current learning science and learning analytics research. We focus on a novel methodological notion of the <i>object-to-interpret-with</i>, a representation that helps students target metacognition involving interpretation and representation. In the context of interpreting machine learning models, we highlight the suitability of \"nutritional labels\"-a family of interpretability tools that are gaining popularity in responsible data science research and practice.</p>","PeriodicalId":46637,"journal":{"name":"International Journal of Artificial Intelligence in Education","volume":"32 3","pages":"783-807"},"PeriodicalIF":4.7000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8049623/pdf/","citationCount":"0","resultStr":"{\"title\":\"Teaching Responsible Data Science: Charting New Pedagogical Territory.\",\"authors\":\"Armanda Lewis, Julia Stoyanovich\",\"doi\":\"10.1007/s40593-021-00241-7\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Although an increasing number of ethical data science and AI courses is available, with many focusing specifically on technology and computer ethics, pedagogical approaches employed in these courses rely exclusively on texts rather than on algorithmic development or data analysis. In this paper we recount a recent experience in developing and teaching a technical course focused on responsible data science, which tackles the issues of ethics in AI, legal compliance, data quality, algorithmic fairness and diversity, transparency of data and algorithms, privacy, and data protection. Interpretability of machine-assisted decision-making is an important component of responsible data science that gives a good lens through which to see other responsible data science topics, including privacy and fairness. We provide emerging pedagogical best practices for teaching technical data science and AI courses that focus on interpretability, and tie responsible data science to current learning science and learning analytics research. We focus on a novel methodological notion of the <i>object-to-interpret-with</i>, a representation that helps students target metacognition involving interpretation and representation. 
In the context of interpreting machine learning models, we highlight the suitability of \\\"nutritional labels\\\"-a family of interpretability tools that are gaining popularity in responsible data science research and practice.</p>\",\"PeriodicalId\":46637,\"journal\":{\"name\":\"International Journal of Artificial Intelligence in Education\",\"volume\":\"32 3\",\"pages\":\"783-807\"},\"PeriodicalIF\":4.7000,\"publicationDate\":\"2022-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8049623/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Artificial Intelligence in Education\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s40593-021-00241-7\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2021/4/15 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Artificial Intelligence in Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s40593-021-00241-7","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2021/4/15 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Teaching Responsible Data Science: Charting New Pedagogical Territory.
Abstract:
Although an increasing number of ethical data science and AI courses are available, with many focusing specifically on technology and computer ethics, the pedagogical approaches employed in these courses rely exclusively on texts rather than on algorithmic development or data analysis. In this paper we recount a recent experience in developing and teaching a technical course focused on responsible data science, which tackles the issues of ethics in AI, legal compliance, data quality, algorithmic fairness and diversity, transparency of data and algorithms, privacy, and data protection. Interpretability of machine-assisted decision-making is an important component of responsible data science, and it offers a good lens through which to view other responsible data science topics, including privacy and fairness. We provide emerging pedagogical best practices for teaching technical data science and AI courses that focus on interpretability, and tie responsible data science to current learning science and learning analytics research. We focus on a novel methodological notion of the object-to-interpret-with, a representation that helps students target metacognition involving interpretation and representation. In the context of interpreting machine learning models, we highlight the suitability of "nutritional labels", a family of interpretability tools that are gaining popularity in responsible data science research and practice.
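The abstract introduces "nutritional labels" only conceptually. As a rough illustration of what such a label might summarize for a trained model, the minimal Python sketch below reports overall accuracy, per-group selection rates, and the most influential features for a binary classifier. The function name make_nutritional_label, the synthetic data, and the chosen fields are assumptions for illustration only; this is not the tool described in the paper nor any specific library's API.

```python
# A minimal, hypothetical "nutritional label" for a trained classifier.
# The function name and label fields are illustrative assumptions, not the
# interface of any specific tool discussed in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def make_nutritional_label(model, X, y, group, feature_names):
    """Summarize a fitted model the way a food label summarizes ingredients."""
    preds = model.predict(X)
    return {
        "overall_accuracy": accuracy_score(y, preds),
        # Selection rate (share of positive predictions) per group,
        # a simple lens on potential disparate impact.
        "selection_rate_by_group": {
            g: float(preds[group == g].mean()) for g in np.unique(group)
        },
        # Coefficient magnitudes as a rough proxy for feature influence.
        "top_features": sorted(
            zip(feature_names, np.abs(model.coef_[0])),
            key=lambda kv: kv[1],
            reverse=True,
        )[:3],
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # synthetic outcome
    group = rng.choice(["A", "B"], size=200)            # synthetic protected attribute
    clf = LogisticRegression().fit(X, y)
    print(make_nutritional_label(clf, X, y, group, ["income", "debt", "age"]))
```

Running the script prints a small dictionary that reads like a label; a classroom exercise might ask students to extend it with additional fields (for example, calibration or stability) and to critique what such a summary omits.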
Journal Introduction:
IJAIED publishes papers concerned with the application of AI to education. It aims to help the development of principles for the design of computer-based learning systems. Its premise is that such principles involve the modelling and representation of relevant aspects of knowledge, before implementation or during execution, and hence require the application of AI techniques and concepts. IJAIED has a very broad notion of the scope of AI and of a "computer-based learning system", as indicated by the following list of topics considered to be within the scope of IJAIED:
- adaptive and intelligent multimedia and hypermedia systems
- agent-based learning environments
- AIED and teacher education
- architectures for AIED systems
- assessment and testing of learning outcomes
- authoring systems and shells for AIED systems
- Bayesian and statistical methods
- case-based systems
- cognitive development
- cognitive models of problem-solving
- cognitive tools for learning
- computer-assisted language learning
- computer-supported collaborative learning
- dialogue (argumentation, explanation, negotiation, etc.)
- discovery environments and microworlds
- distributed learning environments
- educational robotics
- embedded training systems
- empirical studies to inform the design of learning environments
- environments to support the learning of programming
- evaluation of AIED systems
- formal models of components of AIED systems
- help and advice systems
- human factors and interface design
- instructional design principles
- instructional planning
- intelligent agents on the internet
- intelligent courseware for computer-based training
- intelligent tutoring systems
- knowledge and skill acquisition
- knowledge representation for instruction
- modelling metacognitive skills
- modelling pedagogical interactions
- motivation
- natural language interfaces for instructional systems
- networked learning and teaching systems
- neural models applied to AIED systems
- performance support systems
- practical, real-world applications of AIED systems
- qualitative reasoning in simulations
- situated learning and cognitive apprenticeship
- social and cultural aspects of learning
- student modelling and cognitive diagnosis
- support for knowledge building communities
- support for networked communication
- theories of learning and conceptual change
- tools for administration and curriculum integration
- tools for the guided exploration of information resources