CAPTCHA is now almost a standard security technology, and has found widespread application in commercial websites. Usability and robustness are two fundamental issues with CAPTCHA, and they are often closely interconnected. This paper discusses usability issues that should be considered and addressed in the design of CAPTCHAs. Some of these issues are intuitive, but others have subtle implications for robustness (or security). A simple but novel framework for examining CAPTCHA usability is also proposed.
Jeff Yan and A. E. Ahmad, "Usability of CAPTCHAs or usability issues in CAPTCHA design," Symposium On Usable Privacy and Security (SOUPS), 2008. DOI: 10.1145/1408664.1408671
An intrusion detection system (IDS) can be a key component of security incident response within organizations. Traditionally, intrusion detection research has focused on improving the accuracy of IDSs, but recent work has recognized the need to support the security practitioners who receive IDS alarms and investigate suspected incidents. To examine the challenges associated with deploying and maintaining an IDS, we analyzed 9 interviews with IT security practitioners who have worked with IDSs, and we performed participatory observations in an organization deploying a network IDS. We had three main research questions: (1) What do security practitioners expect from an IDS? (2) What difficulties do they encounter when installing and configuring an IDS? (3) How can the usability of an IDS be improved? Our analysis reveals both positive and negative perceptions that security practitioners have of IDSs, as well as several issues encountered during the initial stages of IDS deployment. In particular, practitioners found it difficult to decide where to place the IDS and how best to configure it for use within a distributed environment with multiple stakeholders. We provide recommendations for tool support to help mitigate these challenges and reduce the effort of introducing an IDS within an organization.
R. Werlinger, K. Hawkey, Kasia Muldner, P. Jaferian, and K. Beznosov, "The challenges of using an intrusion detection system: is it worth the effort?," Symposium On Usable Privacy and Security (SOUPS), 2008. DOI: 10.1145/1408664.1408679
An increasing number of people rely on secure websites to carry out their daily business. A survey conducted by Pew Internet states that 42% of all internet users bank online. Considering the types of secure transactions being conducted, businesses are rigorously testing their sites for security flaws. In spite of this testing, some design flaws that prevent secure usage still remain. In this paper, we examine the prevalence of user-visible security design flaws by looking at sites from 214 U.S. financial institutions. We specifically chose financial websites because of their high security requirements. We found a number of flaws that may lead users to make bad security decisions, even if they are knowledgeable about security and exhibit proper browser use consistent with the site's security policies. To our surprise, these design flaws were widespread: 76% of the sites in our survey suffered from at least one design flaw. This indicates that these flaws are not widely understood, even by experts who are responsible for web security. Finally, we present our methodology for testing websites and discuss how it can help systematically discover user-visible security design flaws.
L. Falk, A. Prakash, and Kevin Borders, "Analyzing websites for user-visible security design flaws," Symposium On Usable Privacy and Security (SOUPS), 2008. DOI: 10.1145/1408664.1408680
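The paper's flaw analysis was performed by inspecting sites against a set of criteria; purely as an illustration of what an automated check for one class of user-visible flaw might look like, the sketch below flags a login page that is served over, or whose credential form submits over, plain HTTP. The URL, heuristics, and use of the third-party requests library are assumptions, not the authors' methodology.

# Illustrative only: flags one class of user-visible flaw discussed in the
# paper -- a login form that is delivered over, or submits to, plain HTTP.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

import requests  # assumed third-party dependency; any HTTP client would do


class FormCollector(HTMLParser):
    """Collects (action, has_password_field) for each <form> on a page."""

    def __init__(self):
        super().__init__()
        self.forms = []       # each entry: {"action": str, "password": bool}
        self._current = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self._current = {"action": attrs.get("action") or "", "password": False}
            self.forms.append(self._current)
        elif tag == "input" and self._current is not None:
            if (attrs.get("type") or "").lower() == "password":
                self._current["password"] = True

    def handle_endtag(self, tag):
        if tag == "form":
            self._current = None


def check_login_page(url):
    """Return human-readable descriptions of HTTP-related flaws on one page."""
    flaws = []
    resp = requests.get(url, timeout=10)
    final_url = resp.url  # after any redirects
    if urlparse(final_url).scheme != "https":
        flaws.append(f"login page served over {urlparse(final_url).scheme}: {final_url}")

    parser = FormCollector()
    parser.feed(resp.text)
    for form in parser.forms:
        if not form["password"]:
            continue  # only forms that collect credentials matter here
        target = urljoin(final_url, form["action"])
        if urlparse(target).scheme != "https":
            flaws.append(f"credential form submits over HTTP: {target}")
    return flaws


if __name__ == "__main__":
    # Hypothetical URL; substitute a real login page to try the check.
    print(check_login_page("https://bank.example.com/login"))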
One common practice in relation to alphanumeric passwords is to write them down or share them with a trusted friend or colleague. Graphical password schemes often claim the advantage that they are significantly more secure with respect to both verbal disclosure and writing down. We investigated the reality of this claim in relation to the Passfaces graphical password scheme. By collecting a corpus of naturalistic descriptions of a set of 45 faces, we explored participants' ability to associate descriptions with faces across three conditions in which the decoy faces were selected: (1) at random; (2) on the basis of their visual similarity to the target face; and (3) on the basis of the similarity of the verbal descriptions of the decoy faces to the target face. Participants performed significantly worse when presented with visually and verbally grouped decoys, suggesting that Passfaces can be further secured against description. Subtle differences were also observed both in the nature of male and female descriptions and in male and female performance.
Paul Dunphy, James Nicholson, and P. Olivier, "Securing passfaces for description," Symposium On Usable Privacy and Security (SOUPS), 2008. DOI: 10.1145/1408664.1408668
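Condition (3) above groups decoys by how closely their verbal descriptions resemble the target's. As a rough illustration of how such grouping could be automated, the sketch below ranks candidate decoys by simple word-overlap (Jaccard) similarity between free-text descriptions; the similarity measure, example data, and cut-off are assumptions, not the study's actual procedure.

# Sketch: pick decoy faces whose free-text descriptions are most similar to
# the target's description, using word-overlap (Jaccard) similarity.
# Illustrative only -- not the grouping method used in the paper.
def words(description):
    return {w.strip(".,!?").lower() for w in description.split() if w}

def jaccard(a, b):
    a, b = words(a), words(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def verbally_similar_decoys(target_desc, candidate_descs, k=8):
    """Return the k candidate face ids whose descriptions best match the target."""
    ranked = sorted(candidate_descs.items(),
                    key=lambda item: jaccard(target_desc, item[1]),
                    reverse=True)
    return [face_id for face_id, _ in ranked[:k]]

# Hypothetical descriptions for illustration.
target = "young man with short dark hair and glasses"
candidates = {
    "face_12": "older woman with curly grey hair",
    "face_31": "young man, dark hair, thin glasses",
    "face_07": "smiling child with a red hat",
}
print(verbally_similar_decoys(target, candidates, k=2))  # ['face_31', 'face_12']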
Security questions (or challenge questions) are commonly used to authenticate users who have lost their passwords. We examined the password retrieval mechanisms for a number of personal banking websites, and found that many of them rely in part on security questions with serious usability and security weaknesses. We discuss patterns in the security questions we observed. We argue that today's personal security questions owe their strength to the hardness of an information-retrieval problem. However, as personal information becomes ubiquitously available online, the hardness of this problem, and the security provided by such questions, will likely diminish over time. We supplement our survey of bank security questions with a small user study that supplies some context for how such questions are used in practice.
A. Rabkin, "Personal knowledge questions for fallback authentication: security questions in the era of Facebook," Symposium On Usable Privacy and Security (SOUPS), 2008. DOI: 10.1145/1408664.1408667
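The strength of a personal-knowledge question ultimately comes down to how hard its answer is to look up or to guess. As a back-of-the-envelope illustration of the guessing side (not a calculation or data set from the paper), the sketch below computes min-entropy and a three-guess success rate from a purely hypothetical answer distribution.

# Sketch: how easily a security question is guessed when answers are skewed.
# The answer counts below are hypothetical, only to illustrate the arithmetic.
import math

answer_counts = {"smith": 320, "johnson": 210, "lee": 150, "garcia": 90, "miller": 60}
total = sum(answer_counts.values())
probs = sorted((c / total for c in answer_counts.values()), reverse=True)

min_entropy = -math.log2(probs[0])   # hardness against a single optimal guess
top3_success = sum(probs[:3])        # attacker allowed three guesses

print(f"min-entropy: {min_entropy:.2f} bits")
print(f"probability that the top-3 guesses succeed: {top3_success:.0%}")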
Instant messaging is a prevalent form of communication across the Internet, yet most instant messaging services provide little security against eavesdroppers or impersonators. A variety of existing systems aim to solve this problem, but the one that provides the highest level of privacy is Off-the-Record Messaging (OTR), which aims to give instant messaging conversations the level of privacy available in a face-to-face conversation. In the most recent redesign of OTR, one of the designers' goals, in addition to increasing the security of the protocol, was to make OTR easier to use, without users needing to understand details of computer security such as keys or fingerprints. To determine whether this design goal has been met, we conducted a user study of the OTR plugin for the Pidgin instant messaging client using the think-aloud method. As a result of this study we identified a variety of usability flaws remaining in the design of OTR. These flaws can cause confusion, render the program unusable, and even decrease the level of security afforded to OTR users. We discuss how these errors can be repaired, and identify an area that requires further research to improve its usability.
R. Stedman, Kayo Yoshida, and I. Goldberg, "A user study of off-the-record messaging," Symposium On Usable Privacy and Security (SOUPS), 2008. DOI: 10.1145/1408664.1408678
In this paper, we propose and evaluate Use Your Illusion, a novel mechanism for user authentication that is secure and usable regardless of the size of the device on which it is used. Our system relies on the human ability to recognize a degraded version of a previously seen image. We illustrate how distorted images can be used to maintain the usability of graphical password schemes while making them more resilient to social engineering or observation attacks. Because it is difficult to mentally "revert" a degraded image without knowledge of the original, our scheme provides a strong line of defense against impostor access while preserving the desirable memorability properties of graphical password schemes. Using low-fidelity tests to aid in the design, we implement prototypes of Use Your Illusion as i) an Ajax-based web service and ii) an application for Nokia N70 cellular phones. We conduct a between-subjects usability study of the cellular phone prototype with a total of 99 participants in two experiments. We demonstrate that, regardless of their age or gender, users are very skilled at recognizing degraded versions of self-chosen images, even on small displays and after a period of one month. Our results indicate that graphical passwords with distorted images can achieve error rates equivalent to those of traditional images, but only when the original image is known.
Eiji Hayashi, Rachna Dhamija, Nicolas Christin, and A. Perrig, "Use Your Illusion: secure authentication usable anywhere," Symposium On Usable Privacy and Security (SOUPS), 2008. DOI: 10.1145/1408664.1408670
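The scheme's core operation is producing a degraded version of a user-chosen image that is easy to recognize but hard to invert mentally without having seen the original. The sketch below applies one plausible degradation, heavy downsampling followed by posterization, using Pillow; the filter, parameters, and file names are illustrative assumptions, not the distortion actually used by Use Your Illusion.

# Sketch: one way to produce a "degraded" portfolio image for a
# recognition-based scheme. NOT the distortion used by Use Your Illusion;
# the filter choice, factor, and file names are illustrative assumptions.
from PIL import Image, ImageOps


def degrade(path, factor=12, bits=3):
    """Downsample by `factor`, scale back up, then posterize to 2**bits tones per channel.

    The result stays recognizable to someone who knows the original image,
    which is the property the scheme relies on.
    """
    img = Image.open(path).convert("RGB")
    w, h = img.size
    small = img.resize((max(1, w // factor), max(1, h // factor)), Image.BILINEAR)
    blocky = small.resize((w, h), Image.NEAREST)
    return ImageOps.posterize(blocky, bits)


if __name__ == "__main__":
    # Hypothetical file names.
    degrade("portfolio_photo.jpg").save("portfolio_photo_degraded.png")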
Click-based graphical passwords, which involve clicking a set of user-selected points, have been proposed as a usable alternative to text passwords. We conducted two user studies: an initial lab study to revisit these usability claims, explore for the first time the impact of a wide range of images on usability, and gather information about the points selected by users; and a large-scale field study to examine how click-based graphical passwords work in practice. No such prior field studies have been reported in the literature. We found significant differences in the usability results of the two studies, providing empirical evidence that relying solely on lab studies for security interfaces can be problematic. We also present a first look at whether interference from having multiple graphical passwords affects usability and whether more memorable passwords are necessarily weaker in terms of security.
S. Chiasson, R. Biddle, and P. van Oorschot, "A second look at the usability of click-based graphical passwords," Symposium On Usable Privacy and Security (SOUPS), 2007. DOI: 10.1145/1280680.1280682
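For readers unfamiliar with the mechanism under study: a click-based graphical password is accepted when each login click lands, in order, within a tolerance region around the corresponding enrolled click. The sketch below shows that basic acceptance rule; the 9-pixel square tolerance and the comparison against raw stored points are assumptions, and deployed schemes normally discretize and hash the points rather than compare them directly.

# Sketch of the basic acceptance rule for a click-based graphical password:
# every login click must land within a tolerance of its enrolled click, in
# the same order. Tolerance value and raw-point storage are illustrative.
def clicks_match(enrolled, attempt, tolerance=9):
    if len(enrolled) != len(attempt):
        return False
    return all(abs(ex - ax) <= tolerance and abs(ey - ay) <= tolerance
               for (ex, ey), (ax, ay) in zip(enrolled, attempt))

enrolled = [(120, 85), (310, 240), (47, 199), (255, 60), (180, 305)]
attempt  = [(123, 82), (307, 244), (50, 195), (251, 63), (184, 301)]
print(clicks_match(enrolled, attempt))  # True: every click is within 9 pixels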
Context-sensitive guidance (CSG) can help users make better security decisions. Applications with CSG ask the user to provide relevant context information. Based on such information, these applications then decide or suggest an appropriate course of action. However, users often deem security dialogs irrelevant to the tasks they are performing and try to evade them. This paper contributes two new techniques for hardening CSG against automatic and false user answers. Polymorphic dialogs continuously change the form of the required user inputs and intentionally delay accepting them, forcing users to pay attention to security decisions. Audited dialogs thwart false user answers by (1) warning users that their answers will be forwarded to auditors, and (2) allowing auditors to quarantine users who provide unjustified answers. We implemented CSG against email-borne viruses on the Thunderbird email agent. One version, CSG-PD, includes CSG and polymorphic dialogs. Another version, CSG-PAD, includes CSG and both polymorphic and audited dialogs. In user studies, we found that untrained users accept significantly fewer unjustified risks with CSG-PD than with conventional dialogs. Moreover, they accept significantly fewer unjustified risks with CSG-PAD than with CSG-PD. CSG-PD and CSG-PAD have an insignificant effect on the acceptance of justified risks.
J. Brustoloni and Ricardo Villamarín-Salomón, "Improving security decisions with polymorphic and audited dialogs," Symposium On Usable Privacy and Security (SOUPS), 2007. DOI: 10.1145/1280680.1280691
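To make the polymorphic-dialog idea concrete, the sketch below implements a console version of the technique: the required confirmation input changes on every invocation, and answers given before a minimum delay are rejected. It illustrates the general technique only, not the paper's Thunderbird/CSG-PD implementation; the word pool, delay value, and prompts are assumptions.

# Console sketch of a "polymorphic" confirmation: the required input changes
# each time and is only accepted after a deliberate delay, so the user cannot
# answer by reflex. Illustrative only -- not the paper's implementation.
import random
import time

CONFIRM_WORDS = ["crimson", "walnut", "harbor", "meadow"]  # assumed token pool

def polymorphic_confirm(prompt, min_delay=3.0):
    token = random.choice(CONFIRM_WORDS)   # the form of the answer varies per dialog
    print(prompt)
    print(f'To proceed anyway, read the warning, then type the word "{token}".')
    start = time.monotonic()
    answer = input("> ").strip().lower()
    if time.monotonic() - start < min_delay:
        print("Answer given too quickly; treating it as a reflex click.")
        return False
    return answer == token

if __name__ == "__main__":
    accepted = polymorphic_confirm("This attachment can run programs on your computer.")
    print("attachment opened" if accepted else "attachment blocked")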
We develop a model to identify the regions where users are most likely to click when creating graphical passwords in the PassPoints system. A PassPoints password is a sequence of points, chosen by a user in an image that is displayed on the screen. Our model predicts the probabilities of likely click points; this enables us to predict the entropy of a click point in a graphical password for a given image. The model allows us to evaluate automatically whether a given image is well suited for the PassPoints system, and to analyze possible dictionary attacks against the system. We compare the predictions of our model to the results of experiments involving human users. At this stage, our model and the experiments are small and limited, but they show that user choice can be modeled and that expanding the model and the experiments is a promising direction of research.
A. Dirik, N. Memon, and J. Birget, "Modeling user choice in the PassPoints graphical password scheme," Symposium On Usable Privacy and Security (SOUPS), 2007. DOI: 10.1145/1280680.1280684
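The model's usefulness comes from turning per-region click probabilities into an entropy estimate for a single click (and, under an independence assumption, a rough bound for a full password). The sketch below shows that calculation on a toy probability map; the probability values are hypothetical and are not produced by the authors' image-processing model.

# Sketch: given predicted probabilities for candidate click regions in an
# image, compute the entropy of one click -- the quantity used to judge
# whether an image suits PassPoints. Probabilities here are toy numbers.
import math

def click_entropy(probabilities):
    """Shannon entropy (bits) of a single click over candidate regions."""
    total = sum(probabilities)
    return -sum((p / total) * math.log2(p / total) for p in probabilities if p > 0)

# Hypothetical map: a few highly salient regions dominate user choice.
predicted = [0.30, 0.22, 0.15, 0.10, 0.08, 0.06, 0.05, 0.04]
h1 = click_entropy(predicted)
print(f"entropy per click: {h1:.2f} bits")
print(f"rough upper bound for a 5-click password: {5 * h1:.1f} bits")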