Mirka Henninger, Rudolf Debelak, Yannick Rothacher, Carolin Strobl
Interpretable machine learning for psychological research: Opportunities and pitfalls.

Journal: Psychological methods (Q1, Psychology, Multidisciplinary; Impact Factor: 7.6)
DOI: 10.1037/met0000560
Published: 2023-05-25 (Journal Article)
Citations: 3

Abstract:
In recent years, machine learning methods have become increasingly popular prediction methods in psychology. At the same time, psychological researchers are typically interested not only in making predictions about the dependent variable, but also in learning which predictor variables are relevant, how they influence the dependent variable, and which predictors interact with each other. However, most machine learning methods are not directly interpretable. Interpretation techniques that help researchers describe how a machine learning method arrived at its predictions may be a means to this end. We present a variety of interpretation techniques and illustrate the opportunities they provide for interpreting the results of two widely used black-box machine learning methods that serve as our examples: random forests and neural networks. At the same time, we illustrate potential pitfalls and risks of misinterpretation that may occur in certain data settings. We show how correlated predictors affect interpretations of the relevance and shape of predictor effects, and in which situations interaction effects may or may not be detected. We use simulated didactic examples throughout the article, as well as an empirical data set, to illustrate an approach for objectifying the interpretation of visualizations. We conclude that, when critically reflected upon, interpretable machine learning techniques may provide useful tools for describing complex psychological relationships. (PsycInfo Database Record (c) 2023 APA, all rights reserved.)
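The interpretation techniques and correlated-predictor pitfalls described in the abstract can be illustrated with a small permutation-importance sketch. This is not code from the article: it uses an ordinary least-squares fit as a stand-in black box and simulated data in which only the first predictor carries signal, while the second is merely correlated with it and the third is irrelevant noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + np.sqrt(1 - 0.9**2) * rng.normal(size=n)  # correlated with x1, no extra signal
x3 = rng.normal(size=n)                                    # independent and irrelevant
X = np.column_stack([x1, x2, x3])
y = x1 + 0.5 * rng.normal(size=n)                          # only x1 influences y

# Fit OLS as a stand-in for a black-box model
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(M):
    return beta[0] + M @ beta[1:]

def mse(a, b):
    return float(np.mean((a - b) ** 2))

base = mse(y, predict(X))

# Permutation importance: increase in MSE when one predictor column is shuffled,
# breaking its association with the outcome while keeping its marginal distribution
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(mse(y, predict(Xp)) - base)

print([round(v, 3) for v in importances])
```

With flexible learners such as random forests, the correlated predictor would typically absorb a share of the importance of the truly relevant one, which is exactly the kind of misinterpretation risk the article warns about.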
Journal description:
Psychological Methods is devoted to the development and dissemination of methods for collecting, analyzing, understanding, and interpreting psychological data. Its purpose is the dissemination of innovations in research design, measurement, methodology, and quantitative and qualitative analysis to the psychological community; its further purpose is to promote effective communication about related substantive and methodological issues. The audience is expected to be diverse and to include those who develop new procedures, those who are responsible for undergraduate and graduate training in design, measurement, and statistics, as well as those who employ those procedures in research.