Approximating Robust Linear Regression With An Integral Privacy Guarantee
Pub Date: 2018-08-01 | DOI: 10.1109/PST.2018.8514161
Navoda Senavirathne, V. Torra
Most privacy-preserving techniques suffer an inevitable utility loss due to the perturbations applied to the input data or the models in order to gain privacy. When it comes to machine learning (ML) based prediction models, accuracy is the key criterion for model selection; thus, an accuracy loss due to privacy implementations is undesirable. The motivation of this work is to implement the privacy model “integral privacy” and to evaluate its eligibility as a technique for machine learning model selection while preserving model utility. In this paper, a linear regression approximation method based on integral privacy is implemented, which ensures high accuracy and robustness while maintaining a degree of privacy for ML models. The proposed method uses a re-sampling based estimator to construct a linear regression model, coupled with a rounding based data discretization method to support integral privacy principles. The implementation is evaluated against differential privacy in terms of privacy, accuracy and robustness of the output ML models, and the integral privacy based solution performs better with respect to these criteria.
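As a rough sketch of the re-sampling and rounding idea described above (the function name, grid step, and selection rule here are assumptions for illustration, not the authors' exact algorithm):

```python
# Illustrative sketch only: fit OLS on many resamples, round coefficients to a
# fixed grid, and prefer the model that recurs across the most resamples;
# recurrence across many distinct generating samples is the intuition behind
# integral privacy. All parameters below are invented for the example.
import numpy as np

def most_recurrent_rounded_model(X, y, n_resamples=500, sample_frac=0.5,
                                 step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    counts = {}
    for _ in range(n_resamples):
        idx = rng.choice(n, size=int(sample_frac * n), replace=False)
        coef, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        key = tuple(np.round(coef / step) * step)  # rounding-based discretization
        counts[key] = counts.get(key, 0) + 1
    # Return the rounded model generated by the most resamples, with its count.
    return max(counts.items(), key=lambda kv: kv[1])
```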
{"title":"Approximating Robust Linear Regression With An Integral Privacy Guarantee","authors":"Navoda Senavirathne, V. Torra","doi":"10.1109/PST.2018.8514161","DOIUrl":"https://doi.org/10.1109/PST.2018.8514161","url":null,"abstract":"Most of the privacy-preserving techniques suffer from an inevitable utility loss due to different perturbations carried out on the input data or the models in order to gain privacy. When it comes to machine learning (ML) based prediction models, accuracy is the key criterion for model selection. Thus, an accuracy loss due to privacy implementations is undesirable.The motivation of this work, is to implement the privacy model “integral privacy” and to evaluate its eligibility as a technique for machine learning model selection while preserving model utility. In this paper, a linear regression approximation method is implemented based on integral privacy which ensures high accuracy and robustness while maintaining a degree of privacy for ML models. The proposed method uses a re-sampling based estimator to construct linear regression model which is coupled with a rounding based data discretization method to support integral privacy principles. The implementation is evaluated in comparison with differential privacy in terms of privacy, accuracy and robustness of the output ML models. In comparison, integral privacy based solution provides a better solution with respect to the above criteria.","PeriodicalId":265506,"journal":{"name":"2018 16th Annual Conference on Privacy, Security and Trust (PST)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130244313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enforcing Privacy and Security in Public Cloud Storage
Pub Date: 2018-08-01 | DOI: 10.1109/PST.2018.8514195
João S. Resende, Rolando Martins, L. Antunes
Cloud storage allows users to store their data remotely, giving access anywhere, to anyone with an Internet connection. Accessibility, lack of local data maintenance and absence of local storage hardware are the main advantages of this type of storage, and its adoption is being driven by its accessibility. However, one of the main barriers to widespread adoption is the sovereignty issue arising from a lack of trust in storing private and sensitive information in such a medium. Recent attacks on cloud-based storage show that current solutions do not provide adequate levels of security and consequently fail to protect users' privacy. Usually, users rely solely on the security supplied by the storage providers, which in the presence of a security breach will ultimately lead to data leakage. In this paper, we propose and implement a broker (ARGUS) that acts as a proxy to existing public cloud infrastructures by performing all the necessary authentication, cryptography and erasure coding. ARGUS uses erasure coding to provide efficient redundancy (as opposed to standard replication) while adding an extra layer of data protection in which data is broken into fragments, expanded and encoded with redundant data pieces, and stored across a set of different storage providers (public or private). The key characteristics of ARGUS are confidentiality, integrity and availability of data stored in public cloud systems.
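A toy sketch of the encrypt-then-fragment flow described above; ARGUS uses a proper erasure code, whereas here a single XOR parity fragment stands in for it, and all names and parameters are illustrative assumptions:

```python
# Toy sketch: authenticated encryption, then k data fragments plus one XOR
# parity fragment, so any single lost fragment can be rebuilt from the rest.
# A real deployment (as in ARGUS) would use a proper erasure code and would
# also record len(ct) so the zero padding can be stripped on reassembly.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_and_fragment(data: bytes, key: bytes, k: int = 4):
    nonce = os.urandom(12)
    ct = nonce + AESGCM(key).encrypt(nonce, data, None)
    size = -(-len(ct) // k)                       # ceiling division
    ct = ct.ljust(size * k, b"\0")                # pad to k equal fragments
    frags = [ct[i * size:(i + 1) * size] for i in range(k)]
    parity = frags[0]
    for f in frags[1:]:                           # XOR of all data fragments
        parity = bytes(a ^ b for a, b in zip(parity, f))
    return frags + [parity]                       # one fragment per provider

key = AESGCM.generate_key(bit_length=256)
fragments = encrypt_and_fragment(b"sensitive document", key)
```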
{"title":"Enforcing Privacy and Security in Public Cloud Storage","authors":"João S. Resende, Rolando Martins, L. Antunes","doi":"10.1109/PST.2018.8514195","DOIUrl":"https://doi.org/10.1109/PST.2018.8514195","url":null,"abstract":"Cloud storage allows users to remotely store their data, giving access anywhere and to anyone with an Internet connection. The accessibility, lack of local data maintenance and absence of local storage hardware are the main advantages of this type of storage. The adoption of this type of storage is being driven by its accessibility. However, one of the main barriers to its widespread adoption is the sovereignty issues originated by lack of trust in storing private and sensitive information in such a medium. Recent attacks to cloud-based storage show that current solutions do not provide adequate levels of security and subsequently fail to protect users' privacy. Usually, users rely solely on the security supplied by the storage providers, which in the presence of a security breach will ultimate lead to data leakage. In this paper, we propose and implement a broker (ARGUS) that acts as a proxy to the existing public cloud infrastructures by performing all the necessary authentication, cryptography and erasure coding. ARGUS uses erasure code as a way to provide efficient redundancy (opposite to standard replication) while adding an extra layer to data protection in which data is broken into fragments, expanded and encoded with redundant data pieces that are stored across a set of different storage providers (public or private). The key characteristics of ARGUS are confidentiality, integrity and availability of data stored in public cloud systems.","PeriodicalId":265506,"journal":{"name":"2018 16th Annual Conference on Privacy, Security and Trust (PST)","volume":"237 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134516005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extended Abstract: A Review of Biometric Traits with Insight into Vein Pattern Recognition
Pub Date: 2018-08-01 | DOI: 10.1109/PST.2018.8514156
Soheil Varastehpour, H. Sharifzadeh, Iman Tabatabaei Ardekani, A. Sarrafzadeh
Authentication methods based on human traits, including fingerprint, face, iris, and palmprint, have developed significantly, and they are now mature enough to be relied upon for person identification. Recently, as a new research area, a few methods based on non-facial skin features such as vein patterns have been developed. This extended abstract briefly explores some key features of biometric traits and outlines vein pattern recognition.
{"title":"Extended Abstract: A Review of Biometric Traits with Insight into Vein Pattern Recognition","authors":"Soheil Varastehpour, H. Sharifzadeh, Iman Tabatabaei Ardekani, A. Sarrafzadeh","doi":"10.1109/PST.2018.8514156","DOIUrl":"https://doi.org/10.1109/PST.2018.8514156","url":null,"abstract":"Authentication methods based on some human traits, including fingerprint, face, iris, and palmprint, have been developed significantly, and currently, they are mature enough which have been reliably considered for person identification purposes. Recently, as a new research area, few methods based on non-facial skin features such as vein patterns have been developed. This extended abstract briefly explores some key features of biometric traits whereas vein pattern recognition is also outlined.","PeriodicalId":265506,"journal":{"name":"2018 16th Annual Conference on Privacy, Security and Trust (PST)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125893880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mitigating Client Subnet Leakage in DNS Queries
Pub Date: 2018-08-01 | DOI: 10.1109/PST.2018.8514164
Lanlan Pan, Xin Zhang, Anlei Hu, Xuebiao Yuchi, Jian Wang
Many authoritative servers today return different responses based on the perceived geographical location of the resolver's IP address, to bring the content as close to the users as possible. RFC 7871 proposes an EDNS Client Subnet (ECS) extension to carry part of the client's IP address in the DNS packets sent to the authoritative server. Compared with the resolver's IP address alone, ECS helps the authoritative server estimate the user's geographical location more precisely. However, ECS raises some privacy concerns since it leaks the client's subnet information along the resolution path to the authoritative server. In order to find the right balance between privacy improvement and end-user experience optimization, in this paper we introduce an EDNS ISP Location (EIL) extension to address the client subnet leakage problem of ECS. EIL also reduces the dependence on a high-quality IP geolocation database, which is crucial for ensuring the accuracy of DNS responses under ECS.
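For concreteness, this is how a resolver attaches an ECS option with dnspython, next to a hypothetical EIL-style option; EIL has no standard EDNS option code, so the code point and payload format below are placeholders, not the paper's wire format:

```python
# ECS vs. a hypothetical EIL-style option (illustration only).
import dns.edns
import dns.message

# ECS (RFC 7871): leaks the client's /24 subnet to the authoritative server.
ecs = dns.edns.ECSOption("203.0.113.0", 24)
q_ecs = dns.message.make_query("www.example.com", "A",
                               use_edns=0, options=[ecs])

# EIL-style: carry only the ISP and a coarse location, no client subnet.
# 65001 is from the EDNS local/experimental code range; the payload is invented.
eil = dns.edns.GenericOption(65001, b"isp=64496;loc=CN-GD")
q_eil = dns.message.make_query("www.example.com", "A",
                               use_edns=0, options=[eil])
```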
{"title":"Mitigating Client Subnet Leakage in DNS Queries","authors":"Lanlan Pan, Xin Zhang, Anlei Hu, Xuebiao Yuchi, Jian Wang","doi":"10.1109/PST.2018.8514164","DOIUrl":"https://doi.org/10.1109/PST.2018.8514164","url":null,"abstract":"Many authoritative servers today return different responses based on the perceived geographical location of the resolvers' IP addresses, to bring the content as close to the users as possible. RFC7871 proposes an EDNS Client Subnet (ECS) extension to carry part of the client's IP address in the DNS packets for authoritative server. Compared with the resolver's IP address in the DNS packets, ECS can help the authoritative server to guess the user's geographical location more precisely. However, ECS raises some privacy concerns since it leaks client's subnet information on the resolution path to the authoritative server. In order to find a right balance between privacy improvement and end-user experience optimization, in this paper we introduce an EDNS ISP Location (EIL) extension to address the client subnet leakage problem of ECS. Note that EIL can reduce the dependence on high quality IP geolocation database, while this is crucial to ensure DNS response's accuracy in ECS.","PeriodicalId":265506,"journal":{"name":"2018 16th Annual Conference on Privacy, Security and Trust (PST)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124039519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring the Impact of Password Dataset Distribution on Guessing
Pub Date: 2018-08-01 | DOI: 10.1109/PST.2018.8514194
Hazel Murray, David Malone
Leaks from password datasets are a regular occurrence. An organization may defend a leak with reassurances that just a small subset of passwords was taken. In this paper we show that the leak of a relatively small number of text-based passwords from an organization's stored dataset can lead to the compromise of a much larger collection of users. Taking a sample of passwords from a given dataset, we exploit the knowledge we gain of its distribution to guess other samples from the same dataset. We show theoretically and empirically that the distribution of passwords in the sample follows the same distribution as the passwords in the whole dataset. We propose a function that measures the ability of one distribution to estimate another. Leveraging this, we show that a sample of passwords leaked from a given dataset will compromise the remaining passwords in that dataset better than a sample leaked from another source.
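The core experiment can be pictured with a short sketch (the split size, guess budget, and function name are illustrative assumptions):

```python
# Sketch: order guesses by frequency in a leaked sample, then try them
# against the remaining passwords of the same dataset.
import random
from collections import Counter

def guessing_success(passwords, sample_frac=0.1, budget=1000, seed=0):
    rng = random.Random(seed)
    pw = list(passwords)
    rng.shuffle(pw)
    cut = int(sample_frac * len(pw))
    leaked, remaining = pw[:cut], pw[cut:]
    # Guess in descending order of frequency in the leaked sample.
    guesses = {p for p, _ in Counter(leaked).most_common(budget)}
    return sum(p in guesses for p in remaining) / len(remaining)
```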
{"title":"Exploring the Impact of Password Dataset Distribution on Guessing","authors":"Hazel Murray, David Malone","doi":"10.1109/PST.2018.8514194","DOIUrl":"https://doi.org/10.1109/PST.2018.8514194","url":null,"abstract":"Leaks from password datasets are a regular occur-rence. An organization may defend a leak with reassurances that just a small subset of passwords were taken. In this paper we show that the leak of a relatively small number of text-based passwords from an organizations' stored dataset can lead to a further large collection of users being compromised. Taking a sample of passwords from a given dataset of passwords we exploit the knowledge we gain of the distribution to guess other samples from the same dataset. We show theoretically and empirically that the distribution of passwords in the sample follows the same distribution as the passwords in the whole dataset. We propose a function that measures the ability of one distribution to estimate another. Leveraging this we show that a sample of passwords leaked from a given dataset, will compromise the remaining passwords in that dataset better than a sample leaked from another source.","PeriodicalId":265506,"journal":{"name":"2018 16th Annual Conference on Privacy, Security and Trust (PST)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123503442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis and Evaluation of Syntactic Privacy Notions and Games
Pub Date: 2018-08-01 | DOI: 10.1109/PST.2018.8514214
Robin Ankele, A. Simpson
Previous contributions have established a framework of privacy games that supports the representation of syntactic privacy notions such as anonymity, unlinkability, pseudonymity and unobservability in the form of games. The intention is that, via such abstractions, the understanding of, and relationships between, privacy notions can be clarified, and an unambiguous understanding of adversarial actions given. Yet, without any practical context, the potential benefits of these notions and games may be incomprehensible to system designers and software developers. We utilise these games in a case study based on recommender systems. Consequently, we show that the game-based definitions have the potential to interconnect privacy implications and can be utilised to reason about privacy.
{"title":"Analysis and Evaluation of Syntactic Privacy Notions and Games","authors":"Robin Ankele, A. Simpson","doi":"10.1109/PST.2018.8514214","DOIUrl":"https://doi.org/10.1109/PST.2018.8514214","url":null,"abstract":"Previous contributions have established a framework of privacy games that supports the representation of syntactic privacy notions such as anonymity, unlinkability, pseudonymity and unobservablility in the form of games. The intention is that, via such abstractions, the understanding of, and relationships between, privacy notions can be clarified. Further, an unambiguous understanding of adversarial actions is given. Yet, without any practical context, the potential benefits of these notions and games may be incomprehensible to system designers and software developers. We utilise these games in a case study based on recommender systems. Consequently, we show that the game-based definitions have the potential to interconnect privacy implications and can be utilised to reason about privacy.","PeriodicalId":265506,"journal":{"name":"2018 16th Annual Conference on Privacy, Security and Trust (PST)","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115694488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enabling Users to Balance Social Benefit and Privacy in Online Social Networks
Pub Date: 2018-08-01 | DOI: 10.1109/PST.2018.8514202
S. De, Abdessamad Imine
Attributes such as interests, workplace and relationship status in an Online Social Network (OSN) profile introduce a user to other OSN users. They can contribute to building new friendships as well as reviving and enhancing existing ones. However, the personal data revealed by the user himself or by his vicinity, i.e., his OSN friends, can also make him vulnerable to many privacy harms such as identity theft, stalking or sexual predation. Users therefore have to select the privacy settings for their profile attributes carefully, keeping in mind the trade-off between privacy and social benefit. In this paper, we propose a user-centric two-phase approach, based on Integer Programming, to choose the right privacy settings. Our model helps the user understand which privacy harms he can avoid, after tolerating residual risks, given his desired social benefit requirements, and suggests the privacy settings he should adopt to achieve the maximum social benefit. Thus, users' choices are based on both privacy risks and benefits, a view supported by the EU General Data Protection Regulation (GDPR). We have tested our approach on user profiles with varying vicinities and social benefit requirements.
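A toy version of such an integer program, written with PuLP; the benefit and risk numbers and the single risk budget are invented for illustration, and the paper's two-phase model is richer:

```python
# Toy ILP: binary "show this attribute" choices maximizing social benefit
# subject to a privacy-risk budget. All numbers are invented.
from pulp import LpMaximize, LpProblem, LpVariable, lpSum

attrs = ["interests", "workplace", "relationship"]
benefit = {"interests": 5, "workplace": 4, "relationship": 2}
risk = {"interests": 1, "workplace": 3, "relationship": 4}

show = {a: LpVariable(f"show_{a}", cat="Binary") for a in attrs}
prob = LpProblem("privacy_settings", LpMaximize)
prob += lpSum(benefit[a] * show[a] for a in attrs)          # social benefit
prob += lpSum(risk[a] * show[a] for a in attrs) <= 4        # risk budget
prob.solve()
print({a: int(show[a].value()) for a in attrs})  # here: show interests+workplace
```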
Poster: Agent-based (BDI) modeling for automation of penetration testing
Pub Date: 2018-08-01 | DOI: 10.1109/PST.2018.8514211
Ge Chu, A. Lisitsa
Traditional penetration testing relies on domain expert knowledge and requires considerable human effort, all of which incurs a high cost. In this paper, we propose an automated penetration testing approach based on the belief-desire-intention (BDI) agent model, which is central to research on agent-based processing in that it deals interactively with dynamic, uncertain and complex environments. Penetration testing actions are defined as a series of BDI plans, and the BDI reasoning cycle is used to represent the penetration testing process. The model is extensible, and new plans can be added once they have been elicited from human experts. We report on the results of testing a proof-of-concept BDI-based penetration testing tool in a simulated environment.
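A minimal illustration of the reasoning cycle the poster describes; the plan library and action names below are invented examples, not the authors' tool:

```python
# Minimal BDI-style cycle: a plan fires when its goal matches a desire and
# its context condition is satisfied by the current beliefs.
from dataclasses import dataclass

@dataclass
class Plan:
    goal: str
    context: frozenset   # beliefs required for the plan to be applicable
    actions: list

PLANS = [
    Plan("recon", frozenset(), ["port_scan", "service_fingerprint"]),
    Plan("get_shell", frozenset({"port_80_open"}), ["exploit_web_app"]),
    Plan("get_shell", frozenset({"port_22_open", "weak_creds"}), ["ssh_bruteforce"]),
]

def deliberate(beliefs, desires):
    intentions = []
    for goal in desires:
        for plan in PLANS:
            if plan.goal == goal and plan.context <= beliefs:
                intentions.append(plan.actions)  # adopt first applicable plan
                break
    return intentions

print(deliberate({"port_80_open"}, ["recon", "get_shell"]))
```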
{"title":"Poster: Agent-based (BDI) modeling for automation of penetration testing","authors":"Ge Chu, A. Lisitsa","doi":"10.1109/PST.2018.8514211","DOIUrl":"https://doi.org/10.1109/PST.2018.8514211","url":null,"abstract":"Traditional penetration testing relies on the domain expert knowledge and requires considerable human effort all of which incurs a high cost. In this paper, we propose an automated penetration testing approach based on the belief-desire-intention (BDI) agent model, which is central in the research on agentbased processing in that it deals interactively with dynamic, uncertain and complex environments. Penetration testing actions are defined as a series of BDI plans and the BDI reasoning cycle is used to represent the penetration testing process. The model is extensible and new plans can be added, once they have been elicited from the human experts. We report on the results of testing of proof of concept BDI-based penetration testing tool in the simulated environment.","PeriodicalId":265506,"journal":{"name":"2018 16th Annual Conference on Privacy, Security and Trust (PST)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128433341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Privacy-Preserving Subgraph Checking
Pub Date: 2018-08-01 | DOI: 10.1109/PST.2018.8514182
S. Wüller, Benjamin Assadsolimani, Ulrike Meyer, S. Wetzel
A subgraph check is a variant of common subgraph matching: operating on a reference graph and a test graph, it determines whether the test graph is a subgraph of the reference graph. In this paper, we present two novel privacy-preserving subgraph checking protocols. In our first protocol, all subgraph checks are carried out independently of each other. The second protocol achieves a substantial performance improvement over the straightforward approach of the first by exploiting structural similarities among the test graphs to be checked against the reference graph.
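The plain-text functionality being protected can be pinned down with NetworkX; the cryptographic protocols themselves are the paper's contribution and are not reproduced here, and the graphs below are invented examples:

```python
# The underlying (non-private) check: does the test graph embed in the
# reference graph as an induced subgraph?
import networkx as nx
from networkx.algorithms import isomorphism

reference = nx.Graph([(1, 2), (2, 3), (3, 1), (3, 4)])   # triangle + pendant
test = nx.Graph([("a", "b"), ("b", "c"), ("c", "a")])    # a triangle

matcher = isomorphism.GraphMatcher(reference, test)
print(matcher.subgraph_is_isomorphic())                  # True
```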
{"title":"Privacy-Preserving Subgraph Checking","authors":"S. Wüller, Benjamin Assadsolimani, Ulrike Meyer, S. Wetzel","doi":"10.1109/PST.2018.8514182","DOIUrl":"https://doi.org/10.1109/PST.2018.8514182","url":null,"abstract":"A subgraph check is a variant of the common subgraph matching-operating on a reference and a test graph- determining whether a test graph is a subgraph of the reference graph. In this paper, we present two novel privacy-preserving subgraph checking protocols. In our first protocol, all subgraph checks are carried out independently of each other. The second protocol allows for a substantial performance improvement over the straight-forward approach of the first protocol by exploiting structural similarities among the test graphs to be checked against the reference graph.","PeriodicalId":265506,"journal":{"name":"2018 16th Annual Conference on Privacy, Security and Trust (PST)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134281291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How-to Express Explicit and Auditable Consent
Pub Date: 2018-08-01 | DOI: 10.1109/PST.2018.8514204
Ana C. Carvalho, Rolando Martins, L. Antunes
While consent requests are becoming increasingly important in today's society, especially online as a lawful basis for the processing of personal data, no detailed analysis of the current technological solutions is available. In this work, we describe the existing technological solutions for expressing online consent in a positive fashion, including all the properties that an online solution should hold. We conclude by offering a risk proposal based on a linear combination of the ratings of each of these properties. We observe low agreement between observers, highlighting that it is not easy to fulfil the requirements of the GDPR and showing that such studies are important when performing a Data Protection Impact Assessment. To overcome the low agreement, we propose using the median of the observers' ratings.
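The aggregation step reads naturally as a small sketch (the property names, weights, and ratings here are invented for illustration):

```python
# Sketch: per-property median across observers, then a weighted linear
# combination as the overall score.
from statistics import median

PROPERTIES = ["freely_given", "specific", "informed", "unambiguous", "auditable"]
WEIGHTS = {p: 1.0 for p in PROPERTIES}       # assumed equal weights

observers = [                                # ratings of one consent dialog, 0..5
    {"freely_given": 4, "specific": 3, "informed": 2, "unambiguous": 3, "auditable": 1},
    {"freely_given": 5, "specific": 2, "informed": 3, "unambiguous": 2, "auditable": 0},
    {"freely_given": 3, "specific": 3, "informed": 2, "unambiguous": 4, "auditable": 1},
]

consensus = {p: median(o[p] for o in observers) for p in PROPERTIES}
score = sum(WEIGHTS[p] * consensus[p] for p in PROPERTIES)
print(consensus, score)
```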
{"title":"How-to Express Explicit and Auditable Consent","authors":"Ana C. Carvalho, Rolando Martins, L. Antunes","doi":"10.1109/PST.2018.8514204","DOIUrl":"https://doi.org/10.1109/PST.2018.8514204","url":null,"abstract":"While the importance of consent request in today's society is increasing, specially online as a lawful basis for the processing of personal data, no detailed analysis of current technological solutions is available. In this work, we describe the existing technological solutions to express online consent in a positive fashion, including all the properties that an online solution should hold. We conclude by offering a risk proposal based on the linear combination of the rating of each one of these properties. We observe a low agreement between observers, highlighting that it is not easy to fulfill the requirements of the GDPR and showing that these studies are important when performing a Data Protection Impact Assessment. To overcome the low agreement, we propose the median of the observers' rate.","PeriodicalId":265506,"journal":{"name":"2018 16th Annual Conference on Privacy, Security and Trust (PST)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126683998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}