{"title":"人工智能与数据保护的权利","authors":"Ralf Poscher","doi":"10.2139/SSRN.3769159","DOIUrl":null,"url":null,"abstract":"One way in which the law is often related to new technological developments is as an external restriction. Lawyers are frequently asked whether a new technology is compatible with the law. This implies an asymmetry between technology and the law. Technology appears dynamic, the law stable. We know, however, that this image of the relationship between technology and the law is skewed. The right to data protection itself is an innovative reaction to the law from the early days of mass computing and automated data processing. The paper explores how an essential aspect of AI-technologies, their lack of transparency, might support a different understanding of the right to data protection. From this different perspective, the right to data protection is not regarded as a fundamental right of its own but rather as a doctrinal enhancement of each fundamental right against the abstract dangers of digital data collection and processing. This understanding of the right to data protection shifts the perspective from the individual data processing operation to the data processing system and the abstract dangers connected with it. The systems would not be measured by how they can avoid or justify the processing of some personal data but by the effectiveness of the mechanisms employed to avert the abstract dangers associated with a specific system. This shift in perspective should also allow an assessment of AI-systems despite their lack of transparency.","PeriodicalId":306343,"journal":{"name":"The Cambridge Handbook of Responsible Artificial Intelligence","volume":"48 10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Artificial Intelligence and the Right to Data Protection\",\"authors\":\"Ralf Poscher\",\"doi\":\"10.2139/SSRN.3769159\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"One way in which the law is often related to new technological developments is as an external restriction. Lawyers are frequently asked whether a new technology is compatible with the law. This implies an asymmetry between technology and the law. Technology appears dynamic, the law stable. We know, however, that this image of the relationship between technology and the law is skewed. The right to data protection itself is an innovative reaction to the law from the early days of mass computing and automated data processing. The paper explores how an essential aspect of AI-technologies, their lack of transparency, might support a different understanding of the right to data protection. From this different perspective, the right to data protection is not regarded as a fundamental right of its own but rather as a doctrinal enhancement of each fundamental right against the abstract dangers of digital data collection and processing. This understanding of the right to data protection shifts the perspective from the individual data processing operation to the data processing system and the abstract dangers connected with it. The systems would not be measured by how they can avoid or justify the processing of some personal data but by the effectiveness of the mechanisms employed to avert the abstract dangers associated with a specific system. 
This shift in perspective should also allow an assessment of AI-systems despite their lack of transparency.\",\"PeriodicalId\":306343,\"journal\":{\"name\":\"The Cambridge Handbook of Responsible Artificial Intelligence\",\"volume\":\"48 10 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-01-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The Cambridge Handbook of Responsible Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2139/SSRN.3769159\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Cambridge Handbook of Responsible Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/SSRN.3769159","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Artificial Intelligence and the Right to Data Protection
One way in which the law often relates to new technological developments is as an external restriction: lawyers are frequently asked whether a new technology is compatible with the law. This implies an asymmetry between technology and the law. Technology appears dynamic, the law stable. We know, however, that this image of the relationship between technology and the law is skewed. The right to data protection is itself an innovative reaction of the law, dating from the early days of mass computing and automated data processing. This paper explores how an essential aspect of AI technologies, their lack of transparency, might support a different understanding of the right to data protection. From this perspective, the right to data protection is regarded not as a fundamental right in its own right but as a doctrinal enhancement of each fundamental right against the abstract dangers of digital data collection and processing. This understanding shifts the focus from the individual data processing operation to the data processing system and the abstract dangers associated with it. Systems would then be measured not by whether they can avoid or justify the processing of particular personal data, but by the effectiveness of the mechanisms they employ to avert the abstract dangers associated with them. This shift in perspective should also allow an assessment of AI systems despite their lack of transparency.