{"title":"舞台设置:走向合理的图像推理原则","authors":"Severin Engelmann, Jens Grossklags","doi":"10.1145/3314183.3323846","DOIUrl":null,"url":null,"abstract":"User modeling has become an indispensable feature of a plethora of different digital services such as search engines, social media or e-commerce. Indeed, decision procedures of online algorithmic systems apply various methods including machine learning (ML) to generate virtual models of billions of human beings based on large amounts of personal and other data. Recently, there has been a call for a \"Right to Reasonable Inferences\" for Europe's General Data Protection Regulation (GDPR). Here, we explore a conceptualization of reasonable inference in the context of image analytics that refers to the notion of evidence in theoretical reasoning. The main goal of this paper is to start defining principles for reasonable image inferences, in particular, portraits of individuals. Based on an image analytics case study, we use the notions of first- and second-order inferences to determine the reasonableness of predicted concepts. Finally, we highlight three key challenges for the future of this research space: first, we argue for the potential value of hidden quasi-semantics. Second, we indicate that automatic inferences can create a fundamental trade-off between privacy preservation and \"model fit\" and, third, we end with the question whether human reasoning can serve as a normative benchmark for reasonable automatic inferences.","PeriodicalId":240482,"journal":{"name":"Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization","volume":"54 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Setting the Stage: Towards Principles for Reasonable Image Inferences\",\"authors\":\"Severin Engelmann, Jens Grossklags\",\"doi\":\"10.1145/3314183.3323846\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"User modeling has become an indispensable feature of a plethora of different digital services such as search engines, social media or e-commerce. Indeed, decision procedures of online algorithmic systems apply various methods including machine learning (ML) to generate virtual models of billions of human beings based on large amounts of personal and other data. Recently, there has been a call for a \\\"Right to Reasonable Inferences\\\" for Europe's General Data Protection Regulation (GDPR). Here, we explore a conceptualization of reasonable inference in the context of image analytics that refers to the notion of evidence in theoretical reasoning. The main goal of this paper is to start defining principles for reasonable image inferences, in particular, portraits of individuals. Based on an image analytics case study, we use the notions of first- and second-order inferences to determine the reasonableness of predicted concepts. Finally, we highlight three key challenges for the future of this research space: first, we argue for the potential value of hidden quasi-semantics. 
Second, we indicate that automatic inferences can create a fundamental trade-off between privacy preservation and \\\"model fit\\\" and, third, we end with the question whether human reasoning can serve as a normative benchmark for reasonable automatic inferences.\",\"PeriodicalId\":240482,\"journal\":{\"name\":\"Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization\",\"volume\":\"54 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-06-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3314183.3323846\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3314183.3323846","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Setting the Stage: Towards Principles for Reasonable Image Inferences
User modeling has become an indispensable feature of a plethora of digital services such as search engines, social media, and e-commerce. Indeed, the decision procedures of online algorithmic systems apply various methods, including machine learning (ML), to generate virtual models of billions of human beings based on large amounts of personal and other data. Recently, there has been a call for a "Right to Reasonable Inferences" under Europe's General Data Protection Regulation (GDPR). Here, we explore a conceptualization of reasonable inference for image analytics that draws on the notion of evidence in theoretical reasoning. The main goal of this paper is to start defining principles for reasonable image inferences, in particular for portraits of individuals. Based on an image analytics case study, we use the notions of first- and second-order inferences to determine the reasonableness of predicted concepts. Finally, we highlight three key challenges for the future of this research space: first, we argue for the potential value of hidden quasi-semantics. Second, we indicate that automatic inferences can create a fundamental trade-off between privacy preservation and "model fit". Third, we close with the question of whether human reasoning can serve as a normative benchmark for reasonable automatic inferences.
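
To make the first-/second-order distinction concrete, below is a minimal sketch of how predicted concepts from a portrait-tagging model could be sorted into first-order inferences (directly evidenced by visible image content) and second-order inferences (derived judgments lacking direct perceptual evidence). This is not the authors' implementation: the tagger output format, the concept lists, and the confidence threshold are illustrative assumptions, not values taken from the paper.

```python
# Hedged sketch: flag predicted portrait concepts by inference order.
# The vocabularies and threshold below are hypothetical examples.
from dataclasses import dataclass

# Assumed vocabulary split: perceptually evidenced vs. derived concepts.
FIRST_ORDER = {"glasses", "beard", "smile", "suit"}
SECOND_ORDER = {"intelligence", "trustworthiness", "confidence"}

@dataclass
class Prediction:
    concept: str
    score: float  # model confidence in [0, 1]

def assess_reasonableness(predictions, min_score=0.5):
    """Label each predicted concept by inference order and flag
    second-order concepts as candidates for unreasonable inference."""
    report = []
    for p in predictions:
        if p.score < min_score:
            continue  # ignore low-confidence tags
        if p.concept in FIRST_ORDER:
            order, flag = "first-order", "ok"
        elif p.concept in SECOND_ORDER:
            order, flag = "second-order", "review: lacks direct visual evidence"
        else:
            order, flag = "unknown", "review: unclassified concept"
        report.append((p.concept, p.score, order, flag))
    return report

if __name__ == "__main__":
    # Example output of a hypothetical portrait tagger.
    tags = [Prediction("glasses", 0.93),
            Prediction("smile", 0.81),
            Prediction("intelligence", 0.74),
            Prediction("trustworthiness", 0.42)]
    for row in assess_reasonableness(tags):
        print(row)
```

In this toy setting, "intelligence" would be surfaced for review because no visual feature of a portrait directly evidences it, which is the kind of judgment the proposed principles for reasonable image inferences are meant to support.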