{"title":"Towards Accountability in Machine Learning Applications: A System-testing Approach","authors":"Wayne Xinwei Wan, Thies Lindenthal","doi":"10.2139/ssrn.3758451","DOIUrl":null,"url":null,"abstract":"A rapidly expanding universe of technology-focused startups is trying to change and improve the way real estate markets operate. The undisputed predictive power of machine learning (ML) models often plays a crucial role in the 'disruption' of traditional processes. However, an accountability gap prevails: How do the models arrive at their predictions? Do they do what we hope they do – or are corners cut?<br><br>Training ML models is a software development process at heart. We suggest following the dedicated software testing framework and verifying that the ML model is performing as intended. Illustratively, we augment two image classifiers with a system testing procedure based on local interpretable model-agnostic explanation (LIME) techniques. Analyzing the classifications sheds light on some of the factors that determine the behavior of the systems. We show that cross-validation is simply not good enough when operating in regulated environments.","PeriodicalId":21047,"journal":{"name":"Real Estate eJournal","volume":"42 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Real Estate eJournal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.3758451","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
A rapidly expanding universe of technology-focused startups is trying to change and improve the way real estate markets operate. The undisputed predictive power of machine learning (ML) models often plays a crucial role in the 'disruption' of traditional processes. However, an accountability gap prevails: How do the models arrive at their predictions? Do they do what we hope they do – or are corners cut?
Training ML models is a software development process at heart. We suggest adopting a dedicated software-testing framework to verify that an ML model performs as intended. Illustratively, we augment two image classifiers with a system-testing procedure based on local interpretable model-agnostic explanations (LIME). Analyzing the resulting explanations sheds light on some of the factors that determine the behavior of these systems. We show that cross-validation alone is simply not good enough when operating in regulated environments.
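For illustration, the following is a minimal sketch of such a LIME-based check on an image classifier, using the Python `lime` package. The model file, test image, and threshold of five superpixels are hypothetical placeholders, not the authors' actual setup.

```python
# Minimal sketch: LIME-based system test for an image classifier.
# Assumes the `lime`, `scikit-image`, and TensorFlow/Keras packages;
# "classifier.h5" and "house_image.png" are hypothetical placeholders.
import numpy as np
from lime import lime_image
from skimage.io import imread
from skimage.segmentation import mark_boundaries
from tensorflow.keras.models import load_model

model = load_model("classifier.h5")  # hypothetical trained classifier

def predict_fn(images):
    """Map a batch of RGB images (N, H, W, 3) to class probabilities (N, K)."""
    return model.predict(np.asarray(images) / 255.0)

image = imread("house_image.png")  # hypothetical test image (uint8 RGB)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=3, hide_color=0, num_samples=1000
)

# Highlight the superpixels that drive the top prediction, so a reviewer
# can check whether the model attends to the building itself or to
# spurious cues (sky, cars, watermarks, ...).
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True,
    num_features=5, hide_rest=False
)
overlay = mark_boundaries(img / 255.0, mask)
```

Run across a whole test suite, inspecting these masks is the system-testing step: a case passes only if the highlighted evidence matches the criteria the classifier is supposed to use.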