{"title":"Bugbears or legitimate threats?: (social) scientists' criticisms of machine learning?","authors":"S. Mullainathan","doi":"10.1145/2623330.2630818","DOIUrl":null,"url":null,"abstract":"Social scientists increasingly criticize the use of machine learning techniques to understand human behavior. Criticisms include: (1) They are atheoretical and hence of limited scientific value; (2) They do not address causality and are hence of limited policy value; and (3) They are uninterpretable and hence of limited generalizability value (outside contexts very narrowly similar to the training dataset). These criticisms, I argue, miss the enormous opportunity offered by ML techniques to fundamentally improve the practice of empirical social science. Yet each criticism does contain a grain of truth and overcoming them will require innovations to existing methodologies. Some of these innovations are being developed today and some are yet to be tackled. I will in this talk sketch (1) what these innovations look like or should look like; (2) why they are needed; and (3) the technical challenges they raise. I will illustrate my points using a set of applications that range from financial markets to social policy problems to computational models of basic psychological processes. This talk describes joint work with Jon Kleinberg and individual projects with Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, Anuj Shah, Chenhao Tan, Mike Yeomans and Tom Zimmerman.","PeriodicalId":20536,"journal":{"name":"Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining","volume":"71 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2014-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2623330.2630818","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Social scientists increasingly criticize the use of machine learning techniques to understand human behavior. The criticisms include: (1) they are atheoretical and hence of limited scientific value; (2) they do not address causality and are hence of limited policy value; and (3) they are uninterpretable and hence of limited generalizability outside contexts narrowly similar to the training dataset. These criticisms, I argue, miss the enormous opportunity that ML techniques offer to fundamentally improve the practice of empirical social science. Yet each criticism does contain a grain of truth, and overcoming them will require innovations in existing methodologies. Some of these innovations are being developed today; others have yet to be tackled. In this talk I will sketch (1) what these innovations look like or should look like; (2) why they are needed; and (3) the technical challenges they raise. I will illustrate my points with applications ranging from financial markets to social policy problems to computational models of basic psychological processes. This talk describes joint work with Jon Kleinberg and individual projects with Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, Anuj Shah, Chenhao Tan, Mike Yeomans and Tom Zimmerman.