Pub Date: 2022-11-01 | Epub Date: 2022-11-07 | DOI: 10.1145/3559613.3563202
Hafiz Asif, Jaideep Vaidya
Symptoms-tracking applications allow crowdsensing of health- and location-related data from individuals to track the spread and outbreaks of infectious diseases. During the COVID-19 pandemic, for the first time in history, these apps were widely adopted across the world to combat the pandemic. However, due to the sensitive nature of the data collected by these apps, serious privacy concerns were raised, and the apps were critiqued for their insufficient privacy safeguards. The Covid Nearby project was launched to develop a privacy-focused symptoms-tracking app and to understand users' privacy preferences in health emergencies. In this work, we draw on insights from the Covid Nearby users' data and present an analysis of the significantly varying trends in users' privacy preferences with respect to demographics, attitudes towards information sharing, and health concerns, e.g., after possible exposure to COVID-19. These results and insights can inform health informatics researchers and policy designers in developing more socially acceptable health apps in the future.
{"title":"A Study of Users' Privacy Preferences for Data Sharing on Symptoms-Tracking/Health App.","authors":"Hafiz Asif, Jaideep Vaidya","doi":"10.1145/3559613.3563202","DOIUrl":"10.1145/3559613.3563202","url":null,"abstract":"<p><p>Symptoms-tracking applications allow crowdsensing of health and location related data from individuals to track the spread and outbreaks of infectious diseases. During the COVID-19 pandemic, for the first time in history, these apps were widely adopted across the world to combat the pandemic. However, due to the sensitive nature of the data collected by these apps, serious privacy concerns were raised and apps were critiqued for their insufficient privacy safeguards. The Covid Nearby project was launched to develop a privacy-focused symptoms-tracking app and to understand the privacy preferences of users in health emergencies. In this work, we draw on the insights from the Covid Nearby users' data, and present an analysis of the significantly varying trends in users' privacy preferences with respect to demographics, attitude towards information sharing, and health concerns, e.g. after being possibly exposed to COVID-19. These results and insights can inform health informatics researchers and policy designers in developing more socially acceptable health apps in the future.</p>","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"2022 ","pages":"109-113"},"PeriodicalIF":0.0,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9731474/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10729960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-01 | Epub Date: 2020-11-09 | DOI: 10.1145/3411497.3420214
Emre Yilmaz, Tianxi Ji, Erman Ayday, Pan Li
Although genomic data has significant impact and widespread usage in medical research, it puts individuals' privacy in danger, even if they share their genomic data anonymously or only partially. To address this problem, we present a framework inspired by differential privacy for sharing individuals' genomic data while preserving their privacy. We assume an individual with some sensitive portion of her genome (e.g., mutations or single nucleotide polymorphisms (SNPs) that reveal sensitive information about the individual) that she does not want to share. The goals of the individual are to (i) preserve the privacy of her sensitive data (considering the correlations between the sensitive and non-sensitive parts), (ii) preserve the privacy of interdependent data (data that belongs to other individuals and is correlated with her data), and (iii) share as much non-sensitive data as possible to maximize the utility of data sharing. As opposed to traditional differential privacy-based data sharing schemes, the proposed scheme does not intentionally add noise to data; it is based on selective sharing of data points. We observe that the traditional differential privacy concept does not capture data sharing in such a setting, and hence we first introduce a privacy notion, ϵ-indirect privacy, that addresses data sharing in such settings. We show that the proposed framework does not provide sensitive information to the attacker while providing high data-sharing utility. We also compare the proposed technique with previous ones and show our advantage in terms of both privacy and data-sharing utility.
{"title":"Preserving Genomic Privacy via Selective Sharing.","authors":"Emre Yilmaz, Tianxi Ji, Erman Ayday, Pan Li","doi":"10.1145/3411497.3420214","DOIUrl":"10.1145/3411497.3420214","url":null,"abstract":"<p><p>Although genomic data has significant impact and widespread usage in medical research, it puts individuals' privacy in danger, even if they anonymously or partially share their genomic data. To address this problem, we present a framework that is inspired from differential privacy for sharing individuals' genomic data while preserving their privacy. We assume an individual with some sensitive portion on her genome (e.g., mutations or single nucleotide polymorphisms - SNPs that reveal sensitive information about the individual) that she does not want to share. The goals of the individual are to (i) preserve the privacy of her sensitive data (considering the correlations between the sensitive and non-sensitive part), (ii) preserve the privacy of interdependent data (data that belongs to other individuals that is correlated with her data), and (iii) share as much non-sensitive data as possible to maximize utility of data sharing. As opposed to traditional differential privacy-based data sharing schemes, the proposed scheme does not intentionally add noise to data; it is based on selective sharing of data points. We observe that traditional differential privacy concept does not capture sharing data in such a setting, and hence we first introduce a privacy notation, <i>ϵ</i>-indirect privacy, that addresses data sharing in such settings. We show that the proposed framework does not provide sensitive information to the attacker while it provides a high data sharing utility. We also compare the proposed technique with the previous ones and show our advantage both in terms of privacy and data sharing utility.</p>","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"2020 ","pages":"163-179"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8411901/pdf/nihms-1705344.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39387493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
P. Leon, Justin Cranshaw, L. Cranor, Jim Graves, Manoj Hastak, Blase Ur, Guzi Xu
Online Behavioral Advertising (OBA), the practice of tailoring ads based on an individual's online activities, has led to privacy concerns. In an attempt to mitigate these concerns, the online advertising industry has proposed the use of OBA disclosures: icons, accompanying taglines, and landing pages intended to inform users about OBA and provide opt-out options. We conducted a 1,505-participant online study to investigate Internet users' perceptions of OBA disclosures. The disclosures failed to clearly notify participants about OBA and inform them about their choices. Half of the participants remembered the ads they saw, but only 12% correctly remembered the disclosure taglines attached to ads. When shown the disclosures again, the majority mistakenly believed that ads would pop up if they clicked on the disclosures, and more participants incorrectly thought that clicking the disclosures would let them purchase advertisements than correctly understood that they could then opt out of OBA. "AdChoices", the most commonly used tagline, was particularly ineffective at communicating notice and choice. A majority of participants mistakenly believed that opting out would stop all online tracking, not just tailored ads. We discuss challenges in crafting disclosures and provide suggestions for improvement.
{"title":"What do online behavioral advertising privacy disclosures communicate to users?","authors":"P. Leon, Justin Cranshaw, L. Cranor, Jim Graves, Manoj Hastak, Blase Ur, Guzi Xu","doi":"10.1145/2381966.2381970","DOIUrl":"https://doi.org/10.1145/2381966.2381970","url":null,"abstract":"Online Behavioral Advertising (OBA), the practice of tailoring ads based on an individual's online activities, has led to privacy concerns. In an attempt to mitigate these privacy concerns, the online advertising industry has proposed the use of OBA disclosures: icons, accompanying taglines, and landing pages intended to inform users about OBA and provide opt-out options. We conducted a 1,505-participant online study to investigate Internet users' perceptions of OBA disclosures. The disclosures failed to clearly notify participants about OBA and inform them about their choices. Half of the participants remembered the ads they saw but only 12% correctly remembered the disclosure taglines attached to ads. When shown the disclosures again, the majority mistakenly believed that ads would pop up if they clicked on disclosures, and more participants incorrectly thought that clicking the disclosures would let them purchase advertisements than correctly understood that they could then opt out of OBA. \"AdChoices\", the most commonly used tagline, was particularly ineffective at communicating notice and choice. A majority of participants mistakenly believed that opting out would stop all online tracking, not just tailored ads. We dicuss challenges in crafting disclosures and provide suggestions for improvement.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"12 1","pages":"19-30"},"PeriodicalIF":0.0,"publicationDate":"2012-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85229319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Elisa Costante, Yuanhao Sun, M. Petkovic, J. D. Hartog
A privacy policy is a legal document used by websites to communicate how the personal data that they collect will be managed. By accepting it, the user agrees to release their data under the conditions stated by the policy. Privacy policies should provide enough information to enable users to make informed decisions. Privacy regulations support this by specifying what kind of information has to be provided. As privacy policies can be long and difficult to understand, users tend not to read them. Because of this, users generally agree with a policy without knowing what it states and whether the aspects important to them are covered at all. In this paper we present a solution to assist the user by providing a structured way to browse the policy content and by automatically assessing the completeness of a policy, i.e., the degree of coverage of privacy categories important to the user. The privacy categories are extracted from privacy regulations, while text categorization and machine learning techniques are used to verify which categories are covered by a policy. The results show the feasibility of our approach: an automatic classifier, able to associate the right category to paragraphs of a policy with an accuracy approximating that obtainable by a human judge, can be effectively created.
{"title":"A machine learning solution to assess privacy policy completeness: (short paper)","authors":"Elisa Costante, Yuanhao Sun, M. Petkovic, J. D. Hartog","doi":"10.1145/2381966.2381979","DOIUrl":"https://doi.org/10.1145/2381966.2381979","url":null,"abstract":"A privacy policy is a legal document, used by websites to communicate how the personal data that they collect will be managed. By accepting it, the user agrees to release his data under the conditions stated by the policy. Privacy policies should provide enough information to enable users to make informed decisions. Privacy regulations support this by specifying what kind of information has to be provided. As privacy policies can be long and difficult to understand, users tend not to read them. Because of this, users generally agree with a policy without knowing what it states and whether aspects important to him are covered at all. In this paper we present a solution to assist the user by providing a structured way to browse the policy content and by automatically assessing the completeness of a policy, i.e. the degree of coverage of privacy categories important to the user. The privacy categories are extracted from privacy regulations, while text categorization and machine learning techniques are used to verify which categories are covered by a policy. The results show the feasibility of our approach; an automatic classifier, able to associate the right category to paragraphs of a policy with an accuracy approximating that obtainable by a human judge, can be effectively created.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"14 1","pages":"91-96"},"PeriodicalIF":0.0,"publicationDate":"2012-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77096843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The onion routing (OR) network Tor provides privacy to Internet users by facilitating anonymous web browsing. It achieves anonymity by routing encrypted traffic across a few routers, where the required encryption keys are established using a key exchange protocol. Goldberg, Stebila and Ustaoglu recently characterized the security and privacy properties required by the key exchange protocol used in the OR network. They defined the concept of one-way authenticated key exchange (1W-AKE) and presented a provably secure 1W-AKE protocol called ntor, which is under consideration for deployment in Tor. In this paper, we present a novel 1W-AKE protocol Ace that improves on the computation costs of ntor: in numbers, the client sees an efficiency improvement of 46% and the server of nearly 19%. As far as communication costs are concerned, our protocol requires a client to send one additional group element to a server, compared to the ntor protocol. However, an additional group element easily fits into the fixed-size 512-byte Tor packets (or cells) in the elliptic curve cryptography (ECC) setting. Consequently, our protocol does not introduce a communication overhead into the Tor protocol. Moreover, we prove that our protocol Ace constitutes a 1W-AKE. Given that the ECC setting is under consideration for the Tor system, the improved computational efficiency and the proven security properties make our 1W-AKE an ideal candidate for use in the Tor protocol.
{"title":"Ace: an efficient key-exchange protocol for onion routing","authors":"M. Backes, Aniket Kate, Esfandiar Mohammadi","doi":"10.1145/2381966.2381974","DOIUrl":"https://doi.org/10.1145/2381966.2381974","url":null,"abstract":"The onion routing (OR) network Tor provides privacy to Internet users by facilitating anonymous web browsing. It achieves anonymity by routing encrypted traffic across a few routers, where the required encryption keys are established using a key exchange protocol. Goldberg, Stebila and Ustaoglu recently characterized the security and privacy properties required by the key exchange protocol used in the OR network. They defined the concept of one-way authenticated key exchange (1W-AKE) and presented a provably secure 1W-AKE protocol called ntor, which is under consideration for deployment in Tor.\u0000 In this paper, we present a novel 1W-AKE protocol Ace that improves on the computation costs of ntor: in numbers, the client has an efficiency improvement of 46% and the server of nearly 19%. As far as communication costs are concerned, our protocol requires a client to send one additional group element to a server, compared to the ntor protocol. However, an additional group element easily fits into the 512 bytes fix-sized Tor packets (or cell) in the elliptic curve cryptography (ECC) setting. Consequently, our protocol does not produce a communication overhead in the Tor protocol. Moreover, we prove that our protocol Ace constitutes a 1W-AKE. Given that the ECC setting is under consideration for the Tor system, the improved computational efficiency, and the proven security properties make our 1W-AKE an ideal candidate for use in the Tor protocol.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"1 1","pages":"55-64"},"PeriodicalIF":0.0,"publicationDate":"2012-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88603639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bingshuang Liu, Zhaoyang Liu, Jianyu Zhang, Tao Wei, Wei Zou
Today peer-to-peer (P2P) file sharing networks help tens of millions of users share content on the Internet. However, users' private files in their shared folders might inadvertently become accessible to everybody. In this paper, we investigate this kind of user privacy exposure in Kad, one of the biggest P2P file sharing networks, and try to answer two questions: Q1. Whether and to what extent does this problem exist in current systems? Q2. Are attackers aware of this privacy vulnerability, and are they abusing the obtained private information? We build a monitoring system called Dragonfly, based on the eclipse mechanism, to passively monitor sharing and downloading events in Kad. We also use the honeyfile approach, sharing forged private information, to observe attackers' behaviors. Based on Dragonfly and the honeyfiles, we give affirmative answers to the above two questions. Within two weeks, more than five thousand private files related to ten sensitive keywords were shared by Kad users, and over half of them came from Italy and Spain. Within one month, each honey file was downloaded about 40 times on average, and its embedded password information was exploited 25 times. These results show that this privacy problem has become a serious threat to P2P users. Finally, we design and implement Numen, a plug-in for eMule, which can effectively protect users' private files from being shared without notice.
{"title":"How many eyes are spying on your shared folders?","authors":"Bingshuang Liu, Zhaoyang Liu, Jianyu Zhang, Tao Wei, Wei Zou","doi":"10.1145/2381966.2381982","DOIUrl":"https://doi.org/10.1145/2381966.2381982","url":null,"abstract":"Today peer-to-peer (P2P) file sharing networks help tens of millions of users to share contents on the Internet. However, users' private files in their shared folders might become accessible to everybody inadvertently. In this paper, we investigate this kind of user privacy exposures in Kad, one of the biggest P2P file sharing networks, and try to answer two questions: Q1. Whether and to what extent does this problem exist in current systems? Q2. Are attackers aware of this privacy vulnerability and are they abusing obtained private infortion?\u0000 We build a monitoring system called Dragonfly based on the eclipse mechanism to passively monitor sharing and downloading events in Kad. We also use the Honeyfile approach to share forged private information to observe attackers' behaviors. Based on Dragonfly and Honeyfiles, we give affirmative answers to the above two questions. Within two weeks, more than five thousand private files related to ten sensitive keywords were shared by Kad users, and over half of them come from Italy and Spain. Within one month, each honey file was downloaded for about 40 times in average, and its inner password information was exploited for 25 times. These results show that this privacy problem has become a serious threat for P2P users. Finally, we design and implement Numen, a plug-in for eMule, which can effectively protect user private files from being shared without notice.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"36 1","pages":"109-116"},"PeriodicalIF":0.0,"publicationDate":"2012-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87273522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Caitlin R. Orr, A. Chauhan, Minaxi Gupta, Chris Frisz, Christopher W. Dunn
Motivated by reasons related to privacy, obtrusiveness, and security, there is great interest in the prospect of blocking advertisements. Current approaches to this goal involve keeping sets of URL-based regular expressions, which are matched against every URL fetched on a web page. While generally effective, this approach is not scalable and requires constant manual maintenance of the filtering lists. To counter these shortcomings, we present a fundamentally different approach with which we demonstrate that static program analysis on JavaScript source code can be used to identify JavaScript that loads and displays ads. Our use of static analysis lets us flag and block ad-related scripts before runtime, offering security in addition to blocking ads. Preliminary results from a classifier trained on the features we develop achieve 98% accuracy in identifying ad-related scripts.
{"title":"An approach for identifying JavaScript-loaded advertisements through static program analysis","authors":"Caitlin R. Orr, A. Chauhan, Minaxi Gupta, Chris Frisz, Christopher W. Dunn","doi":"10.1145/2381966.2381968","DOIUrl":"https://doi.org/10.1145/2381966.2381968","url":null,"abstract":"Motivated by reasons related to privacy, obtrusiveness, and security, there is great interest in the prospect of blocking advertisements. Current approaches to this goal involve keeping sets of URL-based regular expressions, which are matched against every URL fetched on a web page. While generally effective, this approach is not scalable and requires constant manual maintenance of the filtering lists. To counter these shortcomings, we present a fundamentally different approach with which we demonstrate that static program analysis on JavaScript source code can be used to identify JavaScript that loads and displays ads. Our use of static analysis lets us flag and block ad-related scripts before runtime, offering security in addition to blocking ads. Preliminary results from a classifier trained on the features we develop achieve 98% accuracy in identifying ad-related scripts.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"68 1","pages":"1-12"},"PeriodicalIF":0.0,"publicationDate":"2012-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82452110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
T. Elahi, Kevin S. Bauer, Mashael Alsabah, Roger Dingledine, I. Goldberg
Tor is the most popular low-latency anonymity overlay network for the Internet, protecting the privacy of hundreds of thousands of people every day. To ensure a high level of security against certain attacks, Tor currently utilizes special nodes called entry guards as each client's long-term entry point into the anonymity network. While the use of entry guards provides clear and well-studied security benefits, it is unclear how well the current entry guard design achieves its security goals in practice. We design and implement Changing of the Guards (COGS), a simulation-based research framework to study Tor's entry guard design. Using COGS, we empirically demonstrate that natural, short-term entry guard churn and explicit time-based entry guard rotation contribute to clients using more entry guards than they should, and thus increase the likelihood of profiling attacks. This churn significantly degrades Tor clients' anonymity. To understand the security and performance implications of current and alternative entry guard selection algorithms, we simulate tens of thousands of Tor clients using COGS based on Tor's entry guard selection and rotation algorithms, with real entry guard data collected over the course of eight months from the live Tor network.
{"title":"Changing of the guards: a framework for understanding and improving entry guard selection in tor","authors":"T. Elahi, Kevin S. Bauer, Mashael Alsabah, Roger Dingledine, I. Goldberg","doi":"10.1145/2381966.2381973","DOIUrl":"https://doi.org/10.1145/2381966.2381973","url":null,"abstract":"Tor is the most popular low-latency anonymity overlay network for the Internet, protecting the privacy of hundreds of thousands of people every day. To ensure a high level of security against certain attacks, Tor currently utilizes special nodes called entry guards as each client's long-term entry point into the anonymity network. While the use of entry guards provides clear and well-studied security benefits, it is unclear how well the current entry guard design achieves its security goals in practice.\u0000 We design and implement Changing of the Guards (COGS), a simulation-based research framework to study Tor's entry guard design. Using COGS, we empirically demonstrate that natural, short-term entry guard churn and explicit time-based entry guard rotation contribute to clients using more entry guards than they should, and thus increase the likelihood of profiling attacks. This churn significantly degrades Tor clients' anonymity. To understand the security and performance implications of current and alternative entry guard selection algorithms, we simulate tens of thousands of Tor clients using COGS based on Tor's entry guard selection and rotation algorithms, with real entry guard data collected over the course of eight months from the live Tor network.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"66 1","pages":"43-54"},"PeriodicalIF":0.0,"publicationDate":"2012-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85631893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This work seeks to understand what "they" (Web advertisers) actually do with the information available to them. We analyze the ads shown to users during controlled browsing as well as examine the inferred demographics and interests shown in Ad Preference Managers provided by advertisers. In an initial study of ad networks and a focused study of the Google ad network, we found many expected contextual, behavioral and location-based ads along with combinations of these types of ads. We also observed profile-based ads. Most behavioral ads were shown as categories in the Ad Preference Manager (APM) of the ad network, but we found unexpected cases where the interests were not visible in the APM. We also found unexpected behavior for the Google ad network in that non-contextual ads were shown related to induced sensitive topics regarding sexual orientation, health and financial matters. In a smaller study of Facebook, we did not find clear evidence that a user's browsing behavior on non-Facebook sites influences the ads shown to the user on Facebook, but we did observe such influence when the Facebook Like button is used to express interest in content. We did observe Facebook ads appearing to target users for sensitive interests with some ads even asserting such sensitive information, which appears to be a violation of Facebook's stated policy.
{"title":"Understanding what they do with what they know","authors":"C. Wills, Can Tatar","doi":"10.1145/2381966.2381969","DOIUrl":"https://doi.org/10.1145/2381966.2381969","url":null,"abstract":"This work seeks to understand what \"they\" (Web advertisers) actually do with the information available to them. We analyze the ads shown to users during controlled browsing as well as examine the inferred demographics and interests shown in Ad Preference Managers provided by advertisers.\u0000 In an initial study of ad networks and a focused study of the Google ad network, we found many expected contextual, behavioral and location-based ads along with combinations of these types of ads. We also observed profile-based ads. Most behavioral ads were shown as categories in the Ad Preference Manager (APM) of the ad network, but we found unexpected cases where the interests were not visible in the APM. We also found unexpected behavior for the Google ad network in that non-contextual ads were shown related to induced sensitive topics regarding sexual orientation, health and financial matters.\u0000 In a smaller study of Facebook, we did not find clear evidence that a user's browsing behavior on non-Facebook sites influences the ads shown to the user on Facebook, but we did observe such influence when the Facebook Like button is used to express interest in content. We did observe Facebook ads appearing to target users for sensitive interests with some ads even asserting such sensitive information, which appears to be a violation of Facebook's stated policy.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"25 1","pages":"13-18"},"PeriodicalIF":0.0,"publicationDate":"2012-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80637320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper describes BTP, a protocol that ensures the confidentiality, integrity, authenticity and forward secrecy of communication over diverse underlying transports, from low-latency, bidirectional transports like TCP to high-latency, unidirectional transports like DVDs sent through the mail. BTP is designed for use in censorship-resistant delay-tolerant overlays that operate over heterogeneous mixtures of underlying transports. By providing consistent security properties for a very wide range of transports, BTP simplifies the design and implementation of such overlays. Forward secrecy is achieved by establishing an initial shared secret between each pair of endpoint devices and using a one-way key derivation function to generate a series of temporary shared secrets from the initial shared secret. Once both devices have destroyed a given temporary secret, any keys derived from it cannot be re-derived if the devices are later compromised. BTP is designed to be compatible with traffic analysis prevention techniques such as traffic morphing: the protocol includes optional padding and uses no timeouts, handshakes or plaintext headers, with the goal of making it difficult to distinguish BTP from other protocols. If unlinkability between communicating devices is required, BTP can use anonymity systems such as Tor and Mixminion as underlying transports.
{"title":"Secure communication over diverse transports: [short paper]","authors":"M. Rogers, Eleanor Saitta","doi":"10.1145/2381966.2381977","DOIUrl":"https://doi.org/10.1145/2381966.2381977","url":null,"abstract":"This paper describes BTP, a protocol that ensures the confidentiality, integrity, authenticity and forward secrecy of communication over diverse underlying transports, from low-latency, bidirectional transports like TCP to high-latency, unidirectional transports like DVDs sent through the mail.\u0000 BTP is designed for use in censorship-resistant delay-tolerant overlays that operate over heterogeneous mixtures of underlying transports. By providing consistent security properties for a very wide range of transports, BTP simplifies the design and implementation of such overlays.\u0000 Forward secrecy is achieved by establishing an initial shared secret between each pair of endpoint devices and using a one-way key derivation function to generate a series of temporary shared secrets from the initial shared secret. Once both devices have destroyed a given temporary secret, any keys derived from it cannot be re-derived if the devices are later compromised.\u0000 BTP is designed to be compatible with traffic analysis prevention techniques such as traffic morphing: the protocol includes optional padding and uses no timeouts, handshakes or plaintext headers, with the goal of making it difficult to distinguish BTP from other protocols. If unlinkability between communicating devices is required, BTP can use anonymity systems such as Tor and Mixminion as underlying transports.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"15 1","pages":"75-80"},"PeriodicalIF":0.0,"publicationDate":"2012-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74424010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}