The way we deal with information has changed significantly in recent years. More and more private data is published on the Internet, and at the same time our capacity to store and process data has vastly increased. Systems that prevent large-scale data collection by placing an "expiration date" on digital data have been proposed before, but they either support only very short expiration times of a few days (such as Vanish and EphPub) or require additional infrastructure (such as FaceCloak and X-pire). We propose a system that (i) implements expiration times of several months and (ii) does so based on existing infrastructure only; to the best of our knowledge this is the first system with both properties. We exploit the fact that many webpages change continuously over time: we extract several key shares from random webpages and use a threshold secret sharing scheme to reconstruct the correct key as long as enough webpages have not yet changed. After several months, enough webpages have changed to completely hide the key. For almost a year, we collected statistics on how a large random sample of webpages changes, and we show that expiration times of several months can be implemented reliably.
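The core mechanism described above is a standard (k, n)-threshold secret sharing scheme. The following Python sketch illustrates Shamir's construction over a prime field; the field prime, threshold, and share count are illustrative assumptions, not the paper's actual parameters.

```python
# Minimal Shamir (k-of-n) secret sharing sketch; the prime, threshold and
# share count are illustrative assumptions, not the paper's parameters.
import random

PRIME = 2**127 - 1  # field modulus (assumed; any prime larger than the secret works)

def split(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split(secret=123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 shares suffice
```

In the paper's setting, each share is extracted from the content of a randomly chosen webpage rather than stored, so once too many of those pages have changed, fewer than k valid shares remain and the key becomes unrecoverable.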
{"title":"Timed revocation of user data: long expiration times from existing infrastructure","authors":"Sirke Reimann, Markus Dürmuth","doi":"10.1145/2381966.2381976","DOIUrl":"https://doi.org/10.1145/2381966.2381976","url":null,"abstract":"The way we deal with information has changed significantly over the last years. More and more private data is published on the Internet, and at the same time our capacity to store and process data has vastly increased. Systems to prevent a large-scale data collection by placing an \"expiration date\" on digital data have been proposed before, but either they only support very short expiration times of a few days (such as Vanish and EphPub), or they require additional infrastructure (such as FaceCloak and X-pire).\u0000 We propose a system that (i) implements expiration times of several month and does this (ii) based on existing infrastructure only; to the best of our knowledge this is the first system to have both properties at the same time. We exploit the fact that many webpages continuously change over time: We extract several key-shares from random webpages and use a threshold secret sharing scheme to reconstruct the correct key if enough webpages have not yet changed. After several month, enough webpages have changed to completely hide the key.\u0000 For almost a year, we have collected statistics about the changes of webpages on a large random sample of webpages and have shown that expiration times of several month can be implemented reliably.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"148 1","pages":"65-74"},"PeriodicalIF":0.0,"publicationDate":"2012-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88652261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tor is an onion routing network that protects users' privacy by relaying traffic through a series of nodes that run Tor software. As a consequence of the anonymity that it provides, Tor is used for many purposes. According to several measurement studies, a small fraction of users who use Tor for bulk downloads account for the majority of traffic on the Tor network. These bulk downloads cause delays for interactive traffic, as many different circuits share bandwidth across each pair of nodes. The resulting delays discourage people from using Tor for normal web activity. We propose a potential solution to this problem: separate interactive and bulk traffic onto two different TCP connections between each pair of nodes. Previous proposals to improve Tor's performance for interactive traffic have focused on prioritizing traffic from less active circuits; however, these prioritization approaches are limited in the benefit they can provide, as they can only affect delays due to traffic processing in Tor itself. Our approach provides a simple way to reduce delays due to additional factors external to Tor, such as the effects of TCP congestion control and the queuing of interactive traffic behind bulk traffic in buffers. We evaluate our proposal by simulating traffic using several methods and show that Torchestra provides up to a 32% reduction in delays for interactive traffic compared to the Tor traffic prioritization scheme of Tang and Goldberg [18] and up to a 40% decrease in delays compared to vanilla Tor.
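A rough sketch of the idea of separating circuits onto two inter-relay connections is shown below. The EWMA-based activity classifier, the smoothing factor, and the threshold are assumptions chosen for illustration, not Torchestra's actual classification logic or code.

```python
# Illustrative sketch (not Torchestra's implementation): classify circuits as
# interactive or bulk using an exponentially weighted moving average (EWMA)
# of recent cell counts, then relay their cells over one of two TCP
# connections to the next node. ALPHA and BULK_THRESHOLD are assumed values.
ALPHA = 0.3          # EWMA smoothing factor (assumption)
BULK_THRESHOLD = 50  # smoothed cells/interval above which a circuit counts as bulk (assumption)

class CircuitClassifier:
    def __init__(self):
        self.ewma = {}  # circuit_id -> smoothed cells per interval

    def record_interval(self, circuit_id, cells_this_interval):
        prev = self.ewma.get(circuit_id, 0.0)
        self.ewma[circuit_id] = ALPHA * cells_this_interval + (1 - ALPHA) * prev

    def is_bulk(self, circuit_id):
        return self.ewma.get(circuit_id, 0.0) > BULK_THRESHOLD

def relay_cell(cell, circuit_id, classifier, interactive_sock, bulk_sock):
    # Interactive cells never share a TCP connection (and its send buffers,
    # congestion window, etc.) with bulk cells, so they are not queued
    # behind large bulk transfers at this hop.
    sock = bulk_sock if classifier.is_bulk(circuit_id) else interactive_sock
    sock.sendall(cell)
```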
{"title":"Torchestra: reducing interactive traffic delays over tor","authors":"D. Gopal, N. Heninger","doi":"10.1145/2381966.2381972","DOIUrl":"https://doi.org/10.1145/2381966.2381972","url":null,"abstract":"Tor is an onion routing network that protects users' privacy by relaying traffic through a series of nodes that run Tor software. As a consequence of the anonymity that it provides, Tor is used for many purposes. According to several measurement studies, a small fraction of users using Tor for bulk downloads account for the majority of traffic on the Tor network. These bulk downloads cause delays for interactive traffic, as many different circuits share bandwidth across each pair of nodes. The resulting delays discourage people from using Tor for normal web activity.\u0000 We propose a potential solution to this problem: separate interactive and bulk traffic onto two different TCP connections between each pair of nodes. Previous proposals to improve Tor's performance for interactive traffic have focused on prioritizing traffic from less active circuits; however, these prioritization approaches are limited in the benefit they can provide, as they can only affect delays due to traffic processing in Tor itself. Our approach provides a simple way to reduce delays due to additional factors external to Tor, such as the effects of TCP congestion control and queuing of interactive traffic behind bulk traffic in buffers. We evaluate our proposal by simulating traffic using several methods and show that Torchestra provides up to 32% reduction in delays for interactive traffic compared to the Tor traffic prioritization scheme of Tang and Goldberg [18] and up to 40% decrease in delays when compared to vanilla Tor.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"37 1","pages":"31-42"},"PeriodicalIF":0.0,"publicationDate":"2012-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88542014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Pashalidis, Nikos Mavrogiannopoulos, Xavier Ferrer Aran, Beñat Bermejo Olaizola
This paper presents 'For Human Eyes Only' (FHEO), our Firefox extension that enables one to conveniently post online messages, such as short emails, comments, and tweets in a form that discourages automatic processing of these messages. Similar to CAPTCHA systems, FHEO distorts the text to various extents. We provide a security analysis of its four default distortion profiles as well as a usability analysis that shows how these profiles affect response time and accurate understanding. Our results illustrate the security/usability tradeoffs that arise in the face of adversaries that use current, off-the-shelf optical character recognition technology in order to launch a variety of attacks. Two profiles, in particular, achieve a level of protection that seems to justify their respective usability degradation in many situations. The 'strongest' distortion profile, however, does not seem to provide a large additional security margin against the adversaries we considered.
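To give a concrete sense of CAPTCHA-style text distortion, here is a minimal sketch using the Pillow imaging library. The rotation angle and blur radius are arbitrary assumed parameters and do not correspond to FHEO's four distortion profiles.

```python
# Illustrative sketch of CAPTCHA-style text distortion (not FHEO's actual
# distortion profiles): render the message to an image, then apply a small
# rotation and a blur to frustrate off-the-shelf OCR. Parameters are assumptions.
from PIL import Image, ImageDraw, ImageFont, ImageFilter

def distort(message, angle=4, blur_radius=1.2):
    font = ImageFont.load_default()
    img = Image.new("RGB", (10 * len(message) + 20, 40), "white")
    draw = ImageDraw.Draw(img)
    draw.text((10, 12), message, fill="black", font=font)
    img = img.rotate(angle, expand=True, fillcolor="white")
    return img.filter(ImageFilter.GaussianBlur(blur_radius))

distort("for human eyes only").save("message.png")
```

Stronger profiles would presumably stack further transformations (waviness, noise, varying baselines) at the cost of the readability measured in the paper's usability analysis.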
{"title":"For human eyes only: security and usability evaluation","authors":"A. Pashalidis, Nikos Mavrogiannopoulos, Xavier Ferrer Aran, Beñat Bermejo Olaizola","doi":"10.1145/2381966.2381984","DOIUrl":"https://doi.org/10.1145/2381966.2381984","url":null,"abstract":"This paper presents 'For Human Eyes Only' (FHEO), our Firefox extension that enables one to conveniently post online messages, such as short emails, comments, and tweets in a form that discourages automatic processing of these messages. Similar to CAPTCHA systems, FHEO distorts the text to various extents. We provide a security analysis of its four default distortion profiles as well as a usability analysis that shows how these profiles affect response time and accurate understanding. Our results illustrate the security/usability tradeoffs that arise in the face of adversaries that use current, off-the-shelf optical character recognition technology in order to launch a variety of attacks. Two profiles, in particular, achieve a level of protection that seems to justify their respective usability degradation in many situations. The 'strongest' distortion profile, however, does not seem to provide a large additional security margin against the adversaries we considered.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"84 1","pages":"129-140"},"PeriodicalIF":0.0,"publicationDate":"2012-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74224436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Aarathi Prasad, Jacob M. Sorber, Timothy Stablein, D. Anthony, D. Kotz
If people are not in control of the collection and sharing of personal health information gathered by mobile health (mHealth) devices and applications, privacy concerns could limit their willingness to use these technologies and reduce the potential benefits mHealth provides. We investigated users' willingness to share their personal information, collected using mHealth sensing devices, with their family, friends, third parties, and the public. Previous work employed hypothetical scenarios, surveys, and interviews to understand people's information-sharing behavior; to the best of our knowledge, ours is the first privacy study in which participants actually had the option to share their own information with real people. We expect our results can guide the development of privacy controls for mobile devices and applications that collect any personal and activity information, not just health or fitness information. Our study revealed three interesting findings about people's privacy concerns regarding their sensed health information: 1) People share certain health information less with friends and family than with strangers, but more with specific third parties than with the public. 2) The information people were less willing to share could be information that the mobile devices collect indirectly. 3) We confirmed that privacy concerns are not static; mHealth device users may change their sharing decisions over time. Based on our findings, we emphasize the need for sensible default settings and flexible privacy controls that allow people to choose different settings for different recipients and to change their sharing settings at any time.
{"title":"Understanding sharing preferences and behavior for mHealth devices","authors":"Aarathi Prasad, Jacob M. Sorber, Timothy Stablein, D. Anthony, D. Kotz","doi":"10.1145/2381966.2381983","DOIUrl":"https://doi.org/10.1145/2381966.2381983","url":null,"abstract":"If people are not in control of the collection and sharing of their personal health information collected using mobile health (mHealth) devices and applications, privacy concerns could limit their willingness to use and reduce potential benefits provided via mHealth. We investigated users' willingness to share their personal information, collected using mHealth sensing devices, with their family, friends, third parties, and the public. Previous work employed hypothetical scenarios, surveys and interviews to understand people's information-sharing behavior; to the best of our knowledge, ours is the first privacy study where participants actually have the option to share their own information with real people. We expect our results can guide the development of privacy controls for mobile devices and applications that collect any personal and activity information, not restricted to health or fitness information.\u0000 Our study revealed three interesting findings about people's privacy concerns regarding their sensed health information: 1) We found that people share certain health information less with friends and family than with strangers, but more with specific third parties than the public. 2) Information that people were less willing to share could be information that is indirectly collected by the mobile devices. 3) We confirmed that privacy concerns are not static; mHealth device users may change their sharing decisions over time. Based on our findings, we emphasize the need for sensible default settings and flexible privacy controls to allow people to choose different settings for different recipients, and to change their sharing settings at any time.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"14 3 1","pages":"117-128"},"PeriodicalIF":0.0,"publicationDate":"2012-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89150873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Henry Deyoung, D. Garg, Limin Jia, D. Kaynar, Anupam Datta
Despite the wide array of frameworks proposed for the formal specification and analysis of privacy laws, there has been comparatively little work on expressing large fragments of actual privacy laws in these frameworks. We attempt to bridge this gap by giving complete logical formalizations of the transmission-related portions of the Health Insurance Portability and Accountability Act (HIPAA) and the Gramm-Leach-Bliley Act (GLBA). To this end, we develop the PrivacyLFP logic, whose features include support for disclosure purposes, real-time constructs, and self-reference via fixed points. To illustrate these features and demonstrate PrivacyLFP's utility, we present formalizations of a collection of clauses from these laws. Due to their size, our full formalizations of HIPAA and GLBA appear in a companion technical report. We discuss ambiguities in the laws that our formalizations revealed and sketch preliminary ideas for computer-assisted enforcement of such privacy policies.
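As a rough illustration of what such a formalization might look like, a HIPAA-style transmission clause permitting disclosure for treatment purposes or with prior patient authorization could be written roughly as follows. This is hypothetical, generic first-order/temporal notation chosen for illustration, not the authors' PrivacyLFP syntax or one of their actual formalized clauses.

```latex
% Hypothetical HIPAA-style transmission clause (illustrative notation only):
% a covered entity p1 may send a message m containing protected health
% information about patient q to p2 only if the disclosure's purpose is
% treatment, or the patient has at some earlier time authorized it.
\forall p_1, p_2, m, q.\;
  \mathit{send}(p_1, p_2, m) \wedge \mathit{contains}(m, q, \mathit{phi})
  \rightarrow
  \mathit{purpose}(m, \mathit{treatment})
  \;\vee\; \Diamond^{-}\, \mathit{authorizes}(q, p_1, \mathit{disclose\_phi})
```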
{"title":"Experiences in the logical specification of the HIPAA and GLBA privacy laws","authors":"Henry Deyoung, D. Garg, Limin Jia, D. Kaynar, Anupam Datta","doi":"10.1145/1866919.1866930","DOIUrl":"https://doi.org/10.1145/1866919.1866930","url":null,"abstract":"Despite the wide array of frameworks proposed for the formal specification and analysis of privacy laws, there has been comparatively little work on expressing large fragments of actual privacy laws in these frameworks. We attempt to bridge this gap by giving complete logical formalizations of the transmission-related portions of the Health Insurance Portability and Accountability Act (HIPAA) and the Gramm-Leach-Bliley Act (GLBA). To this end, we develop the PrivacyLFP logic, whose features include support for disclosure purposes, real-time constructs, and self-reference via fixed points. To illustrate these features and demonstrate PrivacyLFP's utility, we present formalizations of a collection of clauses from these laws. Due to their size, our full formalizations of HIPAA and GLBA appear in a companion technical report. We discuss ambiguities in the laws that our formalizations revealed and sketch preliminary ideas for computer-assisted enforcement of such privacy policies.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"15 1","pages":"73-82"},"PeriodicalIF":0.0,"publicationDate":"2010-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82037486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anonymous blacklisting schemes enable online service providers to block future accesses from abusive users behind anonymizing networks, such as Tor, while preserving the privacy of all users, both abusive and non-abusive. Several such schemes exist in the literature, but all suffer from one of several faults: they rely on trusted parties that can collude to de-anonymize users, they scale poorly with the number of blacklisted users, or they place a very high computational load on the trusted parties. We introduce Jack, an efficient, scalable anonymous blacklisting scheme based on cryptographic accumulators. Compared to the previous efficient schemes, Jack significantly reduces the communication and computation costs required of trusted parties while also weakening the trust placed in these parties. Compared with schemes with no trusted parties, Jack enjoys constant scaling with respect to the number of blacklisted users, imposing dramatically reduced computation and communication costs for service providers. Jack is provably secure in the random oracle model, and we demonstrate its efficiency both analytically and experimentally.
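For readers unfamiliar with cryptographic accumulators, the toy sketch below shows the generic RSA-accumulator idea: a set is compressed into a single group element, and membership of any value can be proven with a constant-size witness. This is an illustrative primitive only, with an insecure toy modulus; it is not Jack's actual construction or its revocation protocols.

```python
# Toy RSA-accumulator sketch (illustrative only; not Jack's construction).
# Blacklisted values are encoded as primes; the accumulator is one group
# element, and each value has a constant-size membership witness.
from math import prod

# Tiny demo modulus; real deployments use a large RSA modulus whose
# factorization is unknown to everyone after a trusted setup.
N = 3233          # 61 * 53 (toy value, insecure)
g = 2             # base element (assumption)

def accumulate(primes):
    return pow(g, prod(primes), N)

def witness(primes, x):
    """Witness for x: the accumulator computed without x's exponent."""
    return pow(g, prod(p for p in primes if p != x), N)

def verify(acc, x, wit):
    return pow(wit, x, N) == acc

blacklist = [101, 103, 107]          # prime encodings of blacklisted tokens
acc = accumulate(blacklist)
w = witness(blacklist, 103)
assert verify(acc, 103, w)                           # 103 is provably blacklisted
assert not verify(acc, 109, witness(blacklist, 109)) # 109 is not
```

The constant-size accumulator and witnesses are what give the scheme described above its constant scaling in the number of blacklisted users.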
{"title":"Jack: scalable accumulator-based nymble system","authors":"Zi Lin, Nicholas Hopper","doi":"10.1145/1866919.1866927","DOIUrl":"https://doi.org/10.1145/1866919.1866927","url":null,"abstract":"Anonymous blacklisting schemes enable online service providers to block future accesses from abusive users behind anonymizing networks, such as Tor, while preserving the privacy of all users, both abusive and non-abusive. Several such schemes exist in the literature, but all suffer from one of several faults: they rely on trusted parties that can collude to de-anonymize users, they scale poorly with the number of blacklisted users, or they place a very high computational load on the trusted parties.\u0000 We introduce Jack, an efficient, scalable anonymous blacklisting scheme based on cryptographic accumulators. Compared to the previous efficient schemes, Jack significantly reduces the communication and computation costs required of trusted parties while also weakening the trust placed in these parties. Compared with schemes with no trusted parties, Jack enjoys constant scaling with respect to the number of blacklisted users, imposing dramatically reduced computation and communication costs for service providers. Jack is provably secure in the random oracle model, and we demonstrate its efficiency both analytically and experimentally.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"46 1","pages":"53-62"},"PeriodicalIF":0.0,"publicationDate":"2010-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83057140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents empirical data on American Internet users' knowledge about and perceptions of Internet advertising techniques. We present the results of in-depth interviews and an online survey focusing on participants' views of online advertising and their ability to make decisions about privacy tradeoffs. We find users hold misconceptions about the purpose of cookies and the effects of clearing them. Only 11% of respondents understood the text description of NAI opt-out cookies, which are a self-help mechanism that enables user choice. 86% believe ads are tailored to websites they have visited in the past, but only 39% believe there are currently ads based on email content, and only 9% think it is acceptable to see ads based on email content as long as their email service is free. About 20% of participants want the benefits of targeted advertising, but 64% find the idea invasive, and we see signs of a possible chilling effect, with 40% self-reporting that they would change their online behavior if advertisers were collecting data. We find a gap between people's willingness to pay to protect their privacy and their willingness to accept discounts in exchange for private information. 69% believe privacy is a right and 61% think it is "extortion" to pay to keep their data private. Only 11% say they would pay to avoid ads. We find participants are comfortable with the idea that advertising supports free online content, but they do not believe their data are part of that exchange.
{"title":"Americans' attitudes about internet behavioral advertising practices","authors":"Aleecia M. McDonald, L. Cranor","doi":"10.1145/1866919.1866929","DOIUrl":"https://doi.org/10.1145/1866919.1866929","url":null,"abstract":"This paper presents empirical data on American Internet users' knowledge about and perceptions of Internet advertising techniques. We present the results of in-depth interviews and an online survey focusing on participants' views of online advertising and their ability to make decisions about privacy tradeoffs. We find users hold misconceptions about the purpose of cookies and the effects of clearing them. Only 11% of respondents understood the text description of NAI opt-out cookies, which are a self-help mechanism that enables user choice. 86% believe ads are tailored to websites they have visited in the past, but only 39% believe there are currently ads based on email content, and only 9% think it is ok to see ads based on email content as long as their email service is free. About 20% of participants want the benefits of targeted advertising, but 64% find the idea invasive, and we see signs of a possible chilling effect with 40% self-reporting they would change their online behavior if advertisers were collecting data. We find a gap between people's willingness to pay to protect their privacy and their willingness to accept discounts in exchange for private information. 69% believe privacy is a right and 61% think it is \"extortion\" to pay to keep their data private. Only 11% say they would pay to avoid ads. We find participants are comfortable with the idea that advertising supports free online content, but they do not believe their data are part of that exchange.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"17 1","pages":"63-72"},"PeriodicalIF":0.0,"publicationDate":"2010-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88023268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In many end-to-end voting systems, a single entity produces each ballot. This entity can be the printer in the case of paper ballots, or the voting machine in the case of an electronic interface. While not able to change election results, this powerful entity has access to confidential information and can reveal the selections made by voters, which, together with the voters' identities, can compromise the secrecy of the ballot. We propose ClearVote, a new end-to-end voting system in which no single entity can reveal ballot selections. The ClearVote ballot consists of three sheets of transparent plastic, each sheet coming from a different printer. Assuming no two printers collude, no single entity has enough knowledge to reveal ballot selections.
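ClearVote distributes the ballot information visually across the three transparent sheets. As a purely illustrative analogue of the "no single printer learns the vote" property (an assumption for exposition, not the paper's visual construction), the same split can be modeled as XOR secret sharing across three shares, one per printer:

```python
# Illustrative 3-way XOR sharing of a ballot selection; each printer holds
# one share, and any proper subset of shares is statistically independent
# of the vote. This is an analogy, not ClearVote's actual ballot encoding.
import secrets

def share_selection(selection: bytes):
    s1 = secrets.token_bytes(len(selection))
    s2 = secrets.token_bytes(len(selection))
    s3 = bytes(a ^ b ^ c for a, b, c in zip(selection, s1, s2))
    return s1, s2, s3

def combine(s1, s2, s3):
    return bytes(a ^ b ^ c for a, b, c in zip(s1, s2, s3))

shares = share_selection(b"candidate-2")
assert combine(*shares) == b"candidate-2"
```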
{"title":"Clearvote: an end-to-end voting system that distributes privacy between printers","authors":"Stefan Popoveniuc, R. Carback","doi":"10.1145/1866919.1866937","DOIUrl":"https://doi.org/10.1145/1866919.1866937","url":null,"abstract":"In many end-to-end voting systems there is a single entity that produces each ballot. This entity can be the printer in the case of paper ballots, or the voting machine in the case of an electronic interface. While not able to change election results, this powerful entity has access to confidential information and can reveal selections made by the voters which, along with the voter's identities, can compromise the secrecy of the ballot.\u0000 We propose ClearVote, a new end-to-end voting system that has no single entity that can reveal ballot selections. The ClearVote ballot has three sheets of transparent plastic, each sheet coming from a different printer. Assuming no two printers collude, there is no single entity with enough knowledge to reveal ballot selections.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"12 1","pages":"119-122"},"PeriodicalIF":0.0,"publicationDate":"2010-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85733384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Iasonas Polakis, Georgios Kontaxis, S. Antonatos, Eleni Gessiou, Thanasis Petsas, E. Markatos
Social networking is one of the most popular Internet activities, with millions of members from around the world. However, users are unaware of the privacy risks involved: even if they protect their private information, their name alone is enough to be used for malicious purposes. In this paper we demonstrate and evaluate how names extracted from social networks can be used to harvest email addresses as a first step for personalized phishing campaigns. Our blind harvesting technique uses names collected from the Facebook and Twitter networks as query terms for the Google search engine, and was able to harvest almost 9 million unique email addresses. We compare our technique with other harvesting methodologies, such as crawling the World Wide Web and dictionary attacks, and show that our approach is more scalable and efficient than the other techniques. We also present three targeted harvesting techniques that aim to collect email addresses coupled with personal information for the creation of personalized phishing emails. By using information available on Twitter to narrow down the search space and by utilizing the Facebook email search functionality, we are able to successfully map 43.4% of the user profiles to their actual email addresses. Furthermore, we harvest profiles from Google Buzz, 40% of which provide a direct mapping to valid Gmail addresses.
{"title":"Using social networks to harvest email addresses","authors":"Iasonas Polakis, Georgios Kontaxis, S. Antonatos, Eleni Gessiou, Thanasis Petsas, E. Markatos","doi":"10.1145/1866919.1866922","DOIUrl":"https://doi.org/10.1145/1866919.1866922","url":null,"abstract":"Social networking is one of the most popular Internet activities with millions of members from around the world. However, users are unaware of the privacy risks involved. Even if they protect their private information, their name is enough to be used for malicious purposes. In this paper we demonstrate and evaluate how names extracted from social networks can be used to harvest email addresses as a first step for personalized phishing campaigns. Our blind harvesting technique uses names collected from the Facebook and Twitter networks as query terms for the Google search engine, and was able to harvest almost 9 million unique email addresses. We compare our technique with other harvesting methodologies, such as crawling the World Wide Web and dictionary attacks, and show that our approach is more scalable and efficient than the other techniques. We also present three targeted harvesting, techniques that aim to collect email addresses coupled with personal information for the creation of personalized phishing emails. By using information available in Twitter to narrow down the search space and, by utilizing the Facebook email search functionality, we are able to successfully map 43.4% of the user profiles to their actual email address. Furthermore, we harvest profiles from Google Buzz, 40% of whom provide a direct mapping to valid Gmail addresses.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"30 1","pages":"11-20"},"PeriodicalIF":0.0,"publicationDate":"2010-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74535069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In a recent interview, Whitfield Diffie argued that "the whole point of cloud computing is economy" and that while it is possible in principle for "computation to be done on encrypted data, [...] current techniques would more than undo the economy gained by the outsourcing and show little sign of becoming practical". Here we explore whether this is truly the case and quantify just how expensive it is to secure computing in untrusted, potentially curious clouds. We start by looking at the economics of computing in general and clouds in particular. Specifically, we derive the end-to-end cost of a CPU cycle in various environments and show that it lies between 0.5 picocents in efficient clouds and nearly 27 picocents for small enterprises (1 picocent = $1 × 10^-14), values validated against current pricing. We then explore the cost of common cryptographic primitives as well as the viability of their deployment for cloud security purposes. We conclude that Diffie was correct: securing outsourced data and computation against untrusted clouds is indeed costlier than the associated savings, with outsourcing mechanisms up to several orders of magnitude costlier than their non-outsourced, locally run alternatives.
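As a back-of-the-envelope illustration of how a per-cycle cost in picocents can be derived from pricing alone (the hourly instance price and clock rate below are assumed example values, not the paper's measured inputs, which also factor in power, cooling, amortized hardware, and personnel):

```python
# Back-of-the-envelope cost of one CPU cycle, in picocents.
# PRICE_PER_HOUR_USD and CLOCK_HZ are assumed example values, not the
# paper's measured inputs.
PRICE_PER_HOUR_USD = 0.085      # assumed hourly price of a single-core cloud instance
CORES = 1
CLOCK_HZ = 2.4e9                # assumed 2.4 GHz core

cycles_per_hour = CORES * CLOCK_HZ * 3600
usd_per_cycle = PRICE_PER_HOUR_USD / cycles_per_hour
picocents_per_cycle = usd_per_cycle * 100 * 1e12   # 1 USD = 100 cents = 1e14 picocents

print(f"{picocents_per_cycle:.2f} picocents per cycle")
# ~0.98 picocents/cycle under these assumptions, the same order of
# magnitude as the 0.5-27 picocent range reported above.
```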
{"title":"On securing untrusted clouds with cryptography","authors":"Yao Chen, R. Sion","doi":"10.1145/1866919.1866935","DOIUrl":"https://doi.org/10.1145/1866919.1866935","url":null,"abstract":"In a recent interview, Whitfield Diffie argued that \"the whole point of cloud computing is economy\" and while it is possible in principle for \"computation to be done on encrypted data, [...] current techniques would more than undo the economy gained by the outsourcing and show little sign of becoming practical\". Here we explore whether this is truly the case and quantify just how expensive it is to secure computing in untrusted, potentially curious clouds.\u0000 We start by looking at the economics of computing in general and clouds in particular. Specifically, we derive the end-to-end cost of a CPU cycle in various environments and show that its cost lies between 0.5 picocents in efficient clouds and nearly 27 picocents for small enterprises (1 picocent = $1 x 10-14), values validated against current pricing.\u0000 We then explore the cost of common cryptography primitives as well as the viability of their deployment for cloud security purposes. We conclude that Diffie was correct. Securing outsourced data and computation against untrusted clouds is indeed costlier than the associated savings, with outsourcing mechanisms up to several orders of magnitudes costlier than their non-outsourced locally run alternatives.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"331 1","pages":"109-114"},"PeriodicalIF":0.0,"publicationDate":"2010-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87874686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}