{"title":"On Serving Two Masters: Directing Critical Technical Practice towards Human-Compatibility in AI","authors":"McKane Andrus","doi":"10.1145/3306618.3314325","DOIUrl":null,"url":null,"abstract":"In this project I have worked towards a method for critical, socially aligned research in Artificial Intelligence by merging the analysis of conceptual commitments in technical work, discourse analysis, and critical technical practice. While the goal of critical technical practice as proposed by [1] is to overcome technical impasses, I explore an alternative use case - ensuring that technical research is aligned with social values. In the design of AI systems, we generally start with a technical formulation of a problem and then attempt to build a system that addresses that problem. Critical technical practice tells us that this technical formulation is always founded upon the discipline's core discourse and ontology, and that difficulty in solving a technical problem might just result from inconsistencies and faults in those core attributes. What I hope to show with this project is that, even when a technical problem seems solvable, critical technical practice can and should be used to ensure the human-compatibility of the technical research.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"83 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3306618.3314325","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
In this project I have worked towards a method for critical, socially aligned research in Artificial Intelligence by merging the analysis of conceptual commitments in technical work, discourse analysis, and critical technical practice. While the goal of critical technical practice as proposed by [1] is to overcome technical impasses, I explore an alternative use case: ensuring that technical research is aligned with social values. In the design of AI systems, we generally start with a technical formulation of a problem and then attempt to build a system that addresses that problem. Critical technical practice tells us that this technical formulation is always founded upon the discipline's core discourse and ontology, and that difficulty in solving a technical problem may simply stem from inconsistencies and faults in those core attributes. What I hope to show with this project is that, even when a technical problem seems solvable, critical technical practice can and should be used to ensure the human-compatibility of the technical research.