"Usability and security evaluation of GeoPass: a geographic location-password scheme"
Julie Thorpe, Brent MacRae, Amirali Salehi-Abari
DOI: 10.1145/2501604.2501618 (Symposium On Usable Privacy and Security, 2013)

We design, implement, and evaluate GeoPass: an interface for digital map-based authentication in which a user chooses a place as their password (a "location-password"). We conducted a multi-session in-lab/at-home user study to evaluate the usability, memorability, and security of location-passwords created with GeoPass. In our study, 97% of users were able to remember their location-password over a span of 8-9 days, most without any failed login attempts. Users generally welcomed GeoPass; all users who completed the study reported that they would at least consider using GeoPass for some of their accounts. We also perform an in-depth usability and security analysis of location-passwords, including the effect of information that could be gleaned through social engineering. Our security analysis shows that location-passwords created with GeoPass can offer reasonable security against online attacks, even when accounting for social engineering. Based on these results, we suggest GeoPass would be most appropriate in contexts where logins occur infrequently, e.g., as an alternative to the secondary authentication methods used for password resets, or for infrequently used online accounts.
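The core mechanism a map-based scheme needs is tolerance: two clicks a few meters apart should verify as the same location-password. A minimal sketch of one way to get that is to snap coordinates to a grid before hashing; the grid size, the helper names, and the PBKDF2 parameters below are assumptions for illustration, not GeoPass's actual implementation.

```python
import hashlib

GRID = 1e-4  # grid cell of ~1e-4 degrees (roughly 11 m); an assumed tolerance, not GeoPass's value


def snap(lat: float, lon: float) -> tuple[int, int]:
    """Discretize a map point so nearby clicks fall in the same cell."""
    return (round(lat / GRID), round(lon / GRID))


def enroll(lat: float, lon: float, salt: bytes) -> bytes:
    """Hash the snapped cell with a slow KDF, as for any password."""
    cell = snap(lat, lon)
    return hashlib.pbkdf2_hmac("sha256", repr(cell).encode(), salt, 100_000)


def verify(lat: float, lon: float, salt: bytes, stored: bytes) -> bool:
    return enroll(lat, lon, salt) == stored
```

One known weakness of plain grid snapping is that a click near a cell boundary can land in the neighboring cell; real schemes typically check adjacent cells or use a distance threshold instead.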
"Little brothers watching you": raising awareness of data leaks on smartphones
Rebecca Balebako, Jaeyeon Jung, Wei Lu, L. Cranor, Carolyn Nguyen
DOI: 10.1145/2501604.2501616 (Symposium On Usable Privacy and Security, 2013)

Today's smartphone applications expect users to decide what information they are willing to share, but fail to provide sufficient feedback about which privacy-sensitive information is leaving the phone, how frequently it is shared, and with which entities. Such feedback can improve users' understanding of potential privacy leaks through apps that collect information about them in unexpected ways. Through a qualitative lab study with 19 participants, we first discuss misconceptions that smartphone users currently hold about two popular game applications that frequently collect the phone's current location and share it with multiple third parties. To measure the gap between users' understanding and actual privacy leaks, we use two interfaces that we developed: just-in-time notifications that appear the moment data is shared, and a visualization that summarizes the shared data. We then report participants' perceived benefits and concerns regarding data sharing with smartphone applications after experiencing the notifications and viewing the visualization. We conclude with a discussion of how heightened user awareness and usable controls can mitigate some of these concerns.
Your attention please: designing security-decision UIs to make genuine risks harder to ignore
Cristian Bravo-Lillo, Saranga Komanduri, L. Cranor, R. Reeder, Manya Sleeper, J. Downs, Stuart E. Schechter
DOI: 10.1145/2501604.2501610 (Symposium On Usable Privacy and Security, 2013)

We designed and tested attractors for computer security dialogs: user-interface modifications used to draw users' attention to the most important information for making decisions. Some of these modifications were purely visual, while others temporarily inhibited potentially dangerous behaviors to redirect users' attention to salient information. We conducted three between-subjects experiments to test the effectiveness of the attractors. In the first two experiments, we sent participants to perform a task on what appeared to be a third-party site that required installation of a browser plugin. We presented them with what appeared to be an installation dialog from their operating system. Participants who saw dialogs that employed inhibitive attractors were significantly less likely than those in the control group to ignore clues that installing this software might be harmful. In the third experiment, we attempted to habituate participants to dialogs that they knew were part of the experiment. We used attractors to highlight a field that was of no value during habituation trials and contained critical information after the habituation period. Participants exposed to inhibitive attractors were two to three times more likely to make an informed decision than those in the control condition.
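An inhibitive attractor works by blocking the risky action until the user has had a chance to process the salient field. A minimal sketch of that gating logic follows; the class, the fixed delay, and the method names are invented for illustration and are not one of the paper's actual attractor designs.

```python
import time


class DelayedConfirmDialog:
    """Sketch of an inhibitive attractor: the confirm action stays disabled
    until the salient field has been on screen for `delay_s` seconds,
    forcing attention toward it before the dangerous choice is possible."""

    def __init__(self, delay_s: float = 2.0):
        self.delay_s = delay_s
        self.shown_at = None  # set when the salient field is displayed

    def show_salient_field(self) -> None:
        self.shown_at = time.monotonic()

    def confirm_enabled(self) -> bool:
        if self.shown_at is None:
            return False
        return time.monotonic() - self.shown_at >= self.delay_s
```

A real toolkit would tie this to the button's enabled state; the point is only that the inhibition is temporal, not a permanent extra step.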
What matters to users?: factors that affect users' willingness to share information with online advertisers
P. Leon, Blase Ur, Yang Wang, Manya Sleeper, Rebecca Balebako, Richard Shay, Lujo Bauer, Mihai Christodorescu, L. Cranor
DOI: 10.1145/2501604.2501611 (Symposium On Usable Privacy and Security, 2013)

Much of the debate surrounding online behavioral advertising (OBA) has centered on how to provide users with notice and choice. An important element left unexplored is how advertising companies' privacy practices affect users' attitudes toward data sharing. We present the results of a 2,912-participant online study investigating how facets of privacy practices---data retention, access to collected data, and scope of use---affect users' willingness to allow the collection of behavioral data. We asked participants to visit a health website, explained OBA to them, and outlined policies governing data collection for OBA purposes. These policies varied by condition. We then asked participants about their willingness to permit the collection of 30 types of information. We identified classes of information that most participants would not share, as well as classes that nearly half of participants would share. More restrictive data-retention and scope-of-use policies increased participants' willingness to allow data collection. In contrast, whether the data was collected on a well-known site and whether users could review and modify their data had minimal impact. We discuss public policy implications and improvements to user interfaces to align with users' privacy preferences.
When it's better to ask forgiveness than get permission: attribution mechanisms for smartphone resources
Christopher Thompson, Maritza L. Johnson, Serge Egelman, D. Wagner, J. King
DOI: 10.1145/2501604.2501605 (Symposium On Usable Privacy and Security, 2013)

Smartphone applications pose interesting security problems because the same resources they use to enhance the user experience may also be used in ways that users might find objectionable. We performed a set of experiments to study whether attribution mechanisms could help users understand how smartphone applications access device resources. First, we performed an online survey and found that, as attribution mechanisms have become available on the Android platform, users notice and use them. Second, we designed new attribution mechanisms; a qualitative experiment suggested that our proposed mechanisms are intuitive to understand. Finally, we performed a laboratory experiment in which we simulated application misbehaviors to observe whether users equipped with our attribution mechanisms were able to identify the offending applications. Our results show that, for users who notice application misbehaviors, these attribution mechanisms are significantly more effective than the status quo.
Confused Johnny: when automatic encryption leads to confusion and mistakes
Scott Ruoti, Nathan Kim, Benjamin W. Burgon, Timothy W. van der Horst, K. Seamons
DOI: 10.1145/2501604.2501609 (Symposium On Usable Privacy and Security, 2013)

A common approach to designing usable security is to hide as many security details as possible from the user, reducing the amount of information and the number of actions a user must deal with. This paper gives an overview of Pwm (Private Webmail), our secure webmail system that uses security overlays to integrate tightly with existing webmail services like Gmail. Pwm's security is mostly transparent, including automatic key management and automatic encryption. We describe a series of Pwm user studies indicating that while nearly all users can use the system without any prior training, the security details are so transparent that a small percentage of users mistakenly sent out unencrypted messages and some users were unsure whether they should trust Pwm. We then conducted user studies with an alternative prototype that uses manual encryption. Surprisingly, users accepted the extra steps of cutting and pasting ciphertext themselves. They avoided mistakes and had more trust in the system with manual encryption. Our results suggest that designers may want to reconsider manual encryption as a way to reduce transparency and foster greater trust.
Exploring the design space of graphical passwords on smartphones
F. Schaub, Marcel Walch, Bastian Könings, M. Weber
DOI: 10.1145/2501604.2501615 (Symposium On Usable Privacy and Security, 2013)

Smartphones have emerged as a likely application area for graphical passwords, because they are easier to input on touchscreens than text passwords. Extensive research on graphical passwords, combined with the capabilities of modern smartphones, results in a complex design space for graphical password schemes on smartphones. We analyze and describe this design space in detail. In the process, we identify and highlight interrelations between usability and security characteristics, available design features, and smartphone capabilities. We further show the expressiveness and utility of the design space for the development of graphical password schemes by implementing five existing schemes on one smartphone platform. We performed usability and shoulder-surfing experiments with the implemented schemes to validate the relations identified in the design space. From our results, we derive a number of helpful insights and guidelines for the design of graphical passwords.
CASA: context-aware scalable authentication
Eiji Hayashi, Sauvik Das, Shahriyar Amini, Jason I. Hong, Ian Oakley
DOI: 10.1145/2501604.2501607 (Symposium On Usable Privacy and Security, 2013)

We introduce context-aware scalable authentication (CASA) as a way of balancing security and usability in authentication. Our core idea is to choose an appropriate form of active authentication (e.g., typing a PIN) based on the combination of multiple passive factors (e.g., a user's current location). We provide a probabilistic framework for dynamically selecting an active authentication scheme that satisfies a specified security requirement given the passive factors. We also present the results of three user studies evaluating the feasibility of our concept and users' receptiveness to it. Our results suggest that location data has good potential as a passive factor, and that an implementation of CASA can eliminate up to 68% of active authentications compared to always using a fixed active authentication scheme. Furthermore, our participants, including those who did not use any security mechanism on their phones, were very positive about CASA and amenable to using it on their phones.
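The selection idea behind CASA can be sketched as a score over passive factors mapped to an active scheme. The factors, weights, and thresholds below are invented for illustration; the paper's probabilistic framework is more principled than this toy version.

```python
def risk_score(at_home: bool, known_wifi: bool) -> float:
    """Toy risk estimate: lower means more passive confidence that the
    legitimate user holds the phone. Weights are assumptions, not CASA's."""
    score = 1.0
    if at_home:
        score *= 0.2   # location as a strong passive factor
    if known_wifi:
        score *= 0.5   # a familiar network adds some confidence
    return score


def choose_auth(score: float) -> str:
    """Pick the lightest active scheme that still meets the (assumed)
    security requirement for the given passive-factor score."""
    if score < 0.15:
        return "none"      # passive factors alone suffice
    if score < 0.6:
        return "PIN"
    return "password"
```

With these made-up numbers, being at home on a known network skips active authentication entirely, while an unfamiliar context demands the strongest scheme, which is the usability-security trade-off the paper formalizes.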
Modifying smartphone user locking behavior
D. V. Bruggen, Shu Liu, Mitchell D. Kajzer, A. Striegel, C. Crowell, J. D'Arcy
DOI: 10.1145/2501604.2501614 (Symposium On Usable Privacy and Security, 2013)

As an increasing number of organizations allow personal smartphones onto their networks, considerable security risk is introduced. That risk is exacerbated by the tremendous heterogeneity of personal mobile devices and their respective pools of installed applications. Furthermore, because the devices are not owned by the organization, authoritatively enforcing organizational security policies is challenging. As a result, a critical part of organizational security is the ability to drive user security behavior through either on-device mechanisms or security awareness programs. In this paper, we establish a baseline for user security behavior from a population of more than 150 smartphone users. We then systematically evaluate the ability to drive behavioral change via messaging centered on morality, deterrence, and incentives. Our findings suggest that appeals to morality are most effective over time, whereas deterrence produces the most immediate reaction. Additionally, our findings show that while a significant portion of users secure their devices without prior intervention, it is difficult to influence change in those who do not.
Formal definitions for usable access control rule sets from goals to metrics
Matthias Beckerle, L. Martucci
DOI: 10.1145/2501604.2501606 (Symposium On Usable Privacy and Security, 2013)

Access control policies describe high-level requirements for access control systems. Access control rule sets ideally translate these policies into a coherent and manageable collection of Allow/Deny rules. Designing rule sets that reflect the desired policies is a difficult and time-consuming task, and the resulting rule sets are often hard to understand and manage. The goal of this paper is to provide means for obtaining usable access control rule sets, which we define as rule sets that (i) reflect the access control policy and (ii) are easy to understand and manage. We formally define the challenges that users face when generating usable access control rule sets and provide formal tools to handle them more easily. We started with a pilot study in which specialists were interviewed, with the objective of listing usability challenges in managing access control rule sets and examining how specialists handle them. The results of the pilot study were compared and combined with results from related work and refined into six novel, formally defined metrics that measure the security and usability aspects of access control rule sets. We validated our findings with two user studies, which demonstrate that our metrics help users generate statistically significantly better rule sets.
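One concrete usability problem such metrics can capture is rule conflict: two rules covering the same subject and resource with opposite decisions. A minimal sketch of a conflict-counting metric follows; the tuple encoding and the function name are assumptions for illustration and are not one of the paper's six formally defined metrics.

```python
from itertools import combinations


def conflicts(rules: list[tuple[str, str, str]]) -> int:
    """Count conflicting rule pairs in a rule set.

    Each rule is (subject, resource, decision) with decision in
    {'allow', 'deny'}. Two rules conflict when they name the same
    subject and resource but make opposite decisions.
    """
    return sum(
        1
        for (s1, r1, d1), (s2, r2, d2) in combinations(rules, 2)
        if s1 == s2 and r1 == r2 and d1 != d2
    )
```

A tool built on such metrics could flag the conflicting pair to the rule-set author rather than silently resolving it by rule order, which is one way metrics of this kind make rule sets easier to understand and manage.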