{"title":"大规模个人数据的隐私保护程序","authors":"Julius Adebayo, Lalana Kagal","doi":"10.1109/ISI.2015.7165950","DOIUrl":null,"url":null,"abstract":"We present a transformation procedure for large scale individual level data that produces output data in which no linear combinations of the resulting attributes can yield the original sensitive attributes from the transformed data. In doing this, our procedure eliminates all linear information regarding a sensitive attribute from the input data. The algorithm combines principal components analysis of the data set with orthogonal projection onto the subspace containing the sensitive attribute(s). The algorithm presented is motivated by applications where there is a need to drastically `sanitize' a data set of all information relating to sensitive attribute(s) before analysis of the data using a data mining algorithm. Sensitive attribute removal (sanitization) is often needed to prevent disparate impact and discrimination on the basis of race, gender, and sexual orientation in high stakes contexts such as determination of access to loans, credit, employment, and insurance. 
We show through experiments that our proposed algorithm outperforms other privacy preserving techniques by more than 20 percent in lowering the ability to reconstruct sensitive attributes from large scale data.","PeriodicalId":292352,"journal":{"name":"2015 IEEE International Conference on Intelligence and Security Informatics (ISI)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"A privacy protection procedure for large scale individual level data\",\"authors\":\"Julius Adebayo, Lalana Kagal\",\"doi\":\"10.1109/ISI.2015.7165950\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We present a transformation procedure for large scale individual level data that produces output data in which no linear combinations of the resulting attributes can yield the original sensitive attributes from the transformed data. In doing this, our procedure eliminates all linear information regarding a sensitive attribute from the input data. The algorithm combines principal components analysis of the data set with orthogonal projection onto the subspace containing the sensitive attribute(s). The algorithm presented is motivated by applications where there is a need to drastically `sanitize' a data set of all information relating to sensitive attribute(s) before analysis of the data using a data mining algorithm. Sensitive attribute removal (sanitization) is often needed to prevent disparate impact and discrimination on the basis of race, gender, and sexual orientation in high stakes contexts such as determination of access to loans, credit, employment, and insurance. 
We show through experiments that our proposed algorithm outperforms other privacy preserving techniques by more than 20 percent in lowering the ability to reconstruct sensitive attributes from large scale data.\",\"PeriodicalId\":292352,\"journal\":{\"name\":\"2015 IEEE International Conference on Intelligence and Security Informatics (ISI)\",\"volume\":\"17 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 IEEE International Conference on Intelligence and Security Informatics (ISI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISI.2015.7165950\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE International Conference on Intelligence and Security Informatics (ISI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISI.2015.7165950","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A privacy protection procedure for large scale individual level data
We present a transformation procedure for large-scale individual-level data that produces output data in which no linear combination of the resulting attributes can yield the original sensitive attributes. In doing so, our procedure eliminates all linear information about a sensitive attribute from the input data. The algorithm combines principal component analysis of the data set with orthogonal projection onto the subspace containing the sensitive attribute(s). It is motivated by applications where a data set must be drastically 'sanitized' of all information relating to sensitive attributes before the data are analyzed with a data mining algorithm. Sensitive attribute removal (sanitization) is often needed to prevent disparate impact and discrimination on the basis of race, gender, and sexual orientation in high-stakes contexts such as determining access to loans, credit, employment, and insurance. We show through experiments that our proposed algorithm outperforms other privacy-preserving techniques by more than 20 percent in lowering the ability to reconstruct sensitive attributes from large-scale data.
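The key guarantee described above — that no linear combination of the transformed attributes can recover the sensitive attribute — can be illustrated by the orthogonal-projection step alone. The sketch below is an illustrative reimplementation under stated assumptions (a dense data matrix `X`, a single numeric sensitive column `s`, and a hypothetical helper name `remove_linear_information`); it is not the authors' full PCA-based procedure or released code.

```python
import numpy as np

def remove_linear_information(X, s):
    """Project the columns of X onto the orthogonal complement of s.

    After the projection, every column (and hence every linear
    combination of columns) is orthogonal to the centered sensitive
    attribute, so no linear model can reconstruct s from the output.
    Illustrative sketch only, assuming one numeric sensitive column.
    """
    # Center both the data and the sensitive attribute so that
    # orthogonality corresponds to zero linear correlation.
    Xc = X - X.mean(axis=0)
    sc = (s - s.mean()).astype(float)
    # For each column, subtract its component along sc.
    coeffs = sc @ Xc / (sc @ sc)          # shape (d,)
    return Xc - np.outer(sc, coeffs)

# Hypothetical example: one attribute strongly correlated with s.
rng = np.random.default_rng(0)
s = rng.normal(size=200)                  # stand-in sensitive attribute
X = np.column_stack([2 * s + rng.normal(size=200),
                     rng.normal(size=200)])
X_clean = remove_linear_information(X, s)
# Inner product of s (centered) with every cleaned column is ~0.
print(np.abs((s - s.mean()) @ X_clean).max() < 1e-8)   # True
```

Because each cleaned column is orthogonal to the centered `s`, the least-squares reconstruction of `s` from `X_clean` is identically zero; the paper's procedure additionally applies principal component analysis to the data set before this projection.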