Legal AI from a Privacy Point of View: Data Protection and Transparency in Focus
Author: Cecilia Magnusson Sjöberg
Journal: Digital Human Sciences: New Objects – New Approaches
Publication date: 2021-06-08 (Journal Article)
DOI: 10.16993/BBK.H (https://doi.org/10.16993/BBK.H)
Citations: 0
Abstract
A major starting point is that transparency is a condition for privacy in the context of personal data processing, especially when that processing is based on artificial intelligence (AI) methods. A key concept here is openness, which, however, is not equivalent to transparency: an organization may well be governed by principles of openness yet still fail to provide transparency because access rights are insufficient or poorly implemented. Given these hypotheses, the chapter investigates and illuminates ways forward, recognizing algorithms, machine learning, and big data as critical success factors of AI-based personal data processing, at least if privacy is to be preserved. In these circumstances, the autonomy of technology calls for attention and needs to be challenged from a variety of perspectives. Not least, a legal approach to the digital human sciences appears to be a resource worth examining further. This applies, for instance, when data subjects in the public as well as the private sphere are exposed to AI, for better or for worse. Providing what may be referred to as a legal shield between user and application might be one remedy for the shortcomings in this context.