Pub Date : 2013-09-08, DOI: 10.1109/SocialCom.2013.21
Nibir Bora, V. Zaytsev, Yu-Han Chang, R. Maheswaran
Social media generated by location-service-enabled cellular devices produces enormous amounts of location-based content. Spatiotemporal analysis of such data facilitates new ways of modeling human behavior and mobility patterns. In this paper, we use over 10 million geo-tagged tweets from the city of Los Angeles as observations of human movement and apply them to understand the relationships among geographical regions, neighborhoods, and gang territories. Using a graph-based representation with street gang territories as vertices and interactions between them as edges, we train a machine learning classifier to tell apart rival and non-rival links. We correctly identify 89% of the true rivalry network, which beats a standard baseline by about 30%. Looking at larger neighborhoods, we show that distance traveled from home follows a power-law distribution, and that the distribution of movement direction can be used as a profile to identify physical (or geographic) barriers when it is not uniform. Finally, considering the temporal dimension of tweets, we detect events taking place around the city by identifying irregularities in tweeting patterns.
Title : Gang Networks, Neighborhoods and Holidays: Spatiotemporal Patterns in Social Media (2013 International Conference on Social Computing)
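The displacement analysis described in the abstract above fits a power law to distances traveled from home. As an illustrative sketch (not the authors' code), the exponent of such a distribution can be estimated by continuous maximum likelihood; the sample data below is synthetic:

```python
import math
import random

def powerlaw_mle_alpha(distances, x_min=1.0):
    """Continuous maximum-likelihood estimate of a power-law exponent
    alpha for the samples >= x_min:  alpha = 1 + n / sum(ln(x / x_min))."""
    tail = [d for d in distances if d >= x_min]
    if not tail:
        raise ValueError("no samples at or above x_min")
    return 1.0 + len(tail) / sum(math.log(d / x_min) for d in tail)

if __name__ == "__main__":
    # Synthetic displacements drawn from a power law with alpha = 2.5 via
    # inverse-transform sampling: x = x_min * (1 - u) ** (-1 / (alpha - 1)).
    random.seed(42)
    sample = [(1 - random.random()) ** (-1 / 1.5) for _ in range(50_000)]
    print(f"estimated alpha = {powerlaw_mle_alpha(sample):.2f}")  # near 2.5
```

With tens of thousands of samples the estimate is tight; on real tweet data, the choice of x_min (the minimum distance at which power-law behavior starts) matters and is not specified in the abstract.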
Pub Date : 2013-09-08, DOI: 10.1109/SOCIALCOM.2013.142
Dennis G. Castleberry, Steven R. Brandt, F. Löffler
This paper details Inkling, a generalized executable paper system for generating hypermedia. Whereas a traditional paper has static content derived from the data (tables, charts, graphs, and animations), an executable paper dynamically generates this content using underlying code and editable input parameters specified in the paper itself. By using a language that can be seamlessly incorporated into the paper text and made transparent to the reader or reviewer, the system allows for ease of both use and validation. Novel in our system are (1) generality, in that it provides a generic coupling between the paper-generating infrastructure and the backend science code; (2) a minimalist, text-based, human-readable input format that abstracts algorithms away from the reader and reviewer; (3) out-of-order, dependency-based execution, which allows the author to chain outputs to inputs; and (4) a scheme for building a database of author-contributed codes that can be easily shared, reused, and referenced.
Title : Inkling: An Executable Paper System for Reviewing Scientific Applications (2013 International Conference on Social Computing)
Pub Date : 2013-09-08, DOI: 10.1109/SocialCom.2013.65
Xing Xie, I. Ray, R. Adaikkalavan
Data Stream Management Systems (DSMSs) address the data processing needs of situational monitoring applications, where data must be collected on-the-fly and processed in real-time. Sensitive data in situational monitoring applications must be processed such that there is no leakage of confidential information. Towards this end, we design a DSMS that allows continuous queries to be executed on multilevel secure (MLS) data in an efficient and secure manner. We provide a prototype to demonstrate the feasibility of our ideas and present some experimental results that discuss the overhead and performance gain of our approach.
Title : On the Efficient Processing of Multilevel Secure Continuous Queries (2013 International Conference on Social Computing)
Pub Date : 2013-09-08, DOI: 10.1109/SocialCom.2013.156
Shaymaa Khater, Hicham G. Elmongui, D. Gračanin
Microblogs are specialized, web-based virtual social network applications. Following microblogs is becoming more challenging, as users can receive thousands of corpus updates every day. Going through all of these updates is a time-consuming process and affects the user's real-life productivity, especially for users who have many followees and thousands of tweets arriving in their timelines every day. In this paper, we propose a personalized recommendation system that aims to give the user a summary of all received corpuses. Because a user's interests change over time, this summary should be based on the user's level of interest in the topic of the corpus at the time of reception. Our method considers three major elements: the user's dynamic level of interest in a topic; the user's social relationships, such as the number of followers and their real geographical neighborhood; and other explicit features related to the publisher's authority and the tweet's content.
Title : Personalized Microblogs Corpus Recommendation Based on Dynamic Users Interests (2013 International Conference on Social Computing)
Pub Date : 2013-09-08, DOI: 10.1109/SocialCom.2013.165
Hyejung Moon, H. Cho
The purpose of this paper is as follows. First, I conceptualize big data as a social problem. Second, I explain the difference between big data and conventional mega information. Third, I recommend a role for the government in utilizing big data as a policy tool. Fourth, referring to copyright and CCL (Creative Commons License) cases, I explain the regulation of big data with respect to data sovereignty. Finally, I suggest a direction for policy design for big data. As a result of this study, policy design for big data should be distinguished from policy design for mega information in order to solve data sovereignty issues. From a legal perspective, big data is generated autonomously; it is accessed openly and shared without any intention. From a market perspective, big data is created without any intention. Big data can change automatically when opened with reference features such as Linked Data, which raises policy issues such as responsibility and authenticity. From a technology perspective, big data is generated in a distributed and diverse way without any concrete form. So, we need a different approach.
Title : Big Data and Policy Design for Data Sovereignty: A Case Study on Copyright and CCL in South Korea (2013 International Conference on Social Computing)
Pub Date : 2013-09-08, DOI: 10.1109/SocialCom.2013.53
Olga Peled, Michael Fire, L. Rokach, Y. Elovici
In recent years, Online Social Networks (OSNs) have become an integral part of our daily lives. There are hundreds of OSNs, each with its own focus and its own particular services and functionalities. To take advantage of the full range of services and functionalities that OSNs offer, users often create several accounts on various OSNs using the same or different personal information. Retrieving all available data about an individual from several OSNs and merging it into one profile can be useful for many purposes. In this paper, we present a method for solving the Entity Resolution (ER) problem of matching user profiles across multiple OSNs. Our algorithm matches two user profiles from two different OSNs using machine learning techniques applied to features extracted from each profile. Using supervised learning and these extracted features, we constructed classifiers that were trained to rank the probability that two user profiles from two different OSNs belong to the same individual. These classifiers utilize 27 features of mainly three types: name-based features (e.g., the Soundex value of two names), general-user-info-based features (e.g., the cosine similarity between two user profiles), and social-network-topology-based features (e.g., the number of mutual friends in two users' friend lists). This experimental study uses real-life data collected from two popular OSNs, Facebook and Xing. The proposed algorithm was evaluated and achieved a classification performance, measured by AUC, of 0.982 in identifying user profiles across the two OSNs.
Title : Entity Matching in Online Social Networks (2013 International Conference on Social Computing)
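The three feature types named in the abstract above can be illustrated with a small sketch. The profile fields (`name`, `about`, `friends`) are hypothetical stand-ins, not the paper's actual 27 features, and the Soundex variant here treats h/w like vowels for brevity:

```python
import math
from collections import Counter

def soundex(name):
    """Classic Soundex code, simplified (h and w treated as vowels)."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    name = name.lower()  # assumed non-empty and alphabetic
    out, prev = [], codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != prev:  # collapse adjacent duplicate codes
            out.append(code)
        prev = code
    return (name[0].upper() + "".join(out) + "000")[:4]

def cosine_sim(text_a, text_b):
    """Cosine similarity between bag-of-words vectors of two texts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def feature_vector(p1, p2):
    """One feature per type from the abstract: name-based, info-based,
    topology-based. Profile dicts here are illustrative only."""
    return {
        "same_soundex": soundex(p1["name"]) == soundex(p2["name"]),
        "profile_cosine": cosine_sim(p1["about"], p2["about"]),
        "mutual_friends": len(set(p1["friends"]) & set(p2["friends"])),
    }
```

Such a vector would then feed a supervised classifier (the paper does not specify which one in the abstract) that scores the probability the two profiles belong to the same person.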
Pub Date : 2013-09-08, DOI: 10.1109/SocialCom.2013.29
Zahy Bnaya, Rami Puzis, Roni Stern, Ariel Felner
In many cases, the best way to find a profile or set of profiles matching some criteria in a social network is via targeted crawling. An important challenge in targeted crawling is choosing the next profile to explore. Existing heuristics for targeted crawling are usually tailored to a specific search criterion and can lead to short-sighted crawling decisions. In this paper, we propose and evaluate a generic approach for guiding a social network crawler that aims to provide a proper balance between exploration and exploitation, based on the recently introduced variant of the Multi-Armed Bandit problem with volatile arms (VMAB). Our approach is general-purpose and provides provable performance guarantees. Experimental results indicate that it compares favorably with the best existing heuristics on two different domains.
Title : Bandit Algorithms for Social Network Queries (2013 International Conference on Social Computing)
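The abstract above does not spell out the VMAB policy itself, so as a hedged stand-in, the standard UCB1 rule shows how a crawler might balance exploration and exploitation when picking which frontier "arm" to expand next (volatile-arm handling omitted; arm semantics are hypothetical):

```python
import math
import random

class UCB1:
    """Standard UCB1 bandit, a simplified stand-in for the paper's VMAB
    policy. Each arm could represent a crawl frontier, e.g. a heuristic
    or a region of the network; reward is a successful match."""
    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms

    def select(self):
        for i, c in enumerate(self.counts):
            if c == 0:
                return i  # play every arm once before using the bound
        total = sum(self.counts)
        return max(range(len(self.counts)),
                   key=lambda i: self.values[i] +
                       math.sqrt(2 * math.log(total) / self.counts[i]))

    def update(self, arm, reward):
        self.counts[arm] += 1
        # incremental mean of rewards seen on this arm
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

if __name__ == "__main__":
    random.seed(0)
    true_p = [0.2, 0.5, 0.8]  # hypothetical hit-rates of three frontiers
    bandit = UCB1(len(true_p))
    for _ in range(2000):
        arm = bandit.select()
        bandit.update(arm, 1.0 if random.random() < true_p[arm] else 0.0)
    print("pulls per arm:", bandit.counts)  # the best arm should dominate
```

The exploration bonus shrinks as an arm accumulates pulls, so the crawler keeps probing under-explored frontiers while spending most of its budget on the one that pays off.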
Pub Date : 2013-09-08, DOI: 10.1109/SocialCom.2013.60
Haifeng Liu, Zheng Hu, Dian Zhou, Hui Tian
User behavior analysis and prediction have been widely applied in personalized search, precise advertising delivery, and other personalized services. How to evaluate the performance of prediction models or algorithms is a core problem. The most commonly used off-line experiment is a simple and convenient evaluation strategy. However, the existing assessment measures are mostly based on arithmetic averages, such as precision, recall, F-measure, mean absolute error (MAE), and root mean squared error (RMSE). These approaches have two drawbacks. First, they cannot depict prediction performance at a fine-grained level; they provide only a single average value with which to compare different algorithms. Second, they are not reasonable when the evaluation results do not follow a normal distribution. In this paper, by analyzing a large set of prediction evaluation results, we find that some performance evaluation results follow an approximate power-law distribution rather than a normal distribution. Therefore, the paper proposes a cumulative probability distribution model to evaluate the performance of prediction algorithms. The model first calculates the probability of each evaluation result and then derives the cumulative probability distribution function. Moreover, we present an evaluation expectation value (EEV) to represent the overall performance of a prediction algorithm.
Title : Cumulative Probability Distribution Model for Evaluating User Behavior Prediction Algorithms (2013 International Conference on Social Computing)
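A minimal sketch of the evaluation model described above, under the assumption that the EEV is the probability-weighted mean of per-instance results; the abstract does not give its exact definition, so both functions here are one plausible reading, not the paper's formulas:

```python
def empirical_cdf(results):
    """Empirical CDF of per-instance evaluation results (e.g. per-user
    absolute errors): returns (value, P(X <= value)) pairs in order."""
    xs = sorted(results)
    n = len(xs)
    return [(x, (i + 1) / n) for i, x in enumerate(xs)]

def evaluation_expectation(results):
    """Assumed EEV: each observation carries probability 1/n, so the
    probability-weighted expectation reduces to the sample mean."""
    return sum(results) / len(results)

# Toy per-user errors: the CDF exposes the full distribution, while a
# single average (as the abstract criticizes) would hide its shape.
errors = [0.1, 0.1, 0.2, 0.9]
print(empirical_cdf(errors))
print("EEV:", evaluation_expectation(errors))
```

The point the abstract makes survives even in this toy: two algorithms with the same mean error can have very different CDFs, and the curve makes that visible.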
Pub Date : 2013-09-08, DOI: 10.1109/SocialCom.2013.106
Salim Jouili, Valentin Vansteenberghe
In recent years, more and more companies provide services that can no longer be delivered efficiently using relational databases. These companies are forced to turn to alternative database models such as XML databases, object-oriented databases, document-oriented databases and, more recently, graph databases. Graph databases have existed for only a few years, and although there have been some comparison attempts, they mostly focus on certain aspects only. In this paper, we present a distributed graph database comparison framework and the results we obtained by comparing four important players in the graph database market: Neo4j, OrientDB, Titan, and DEX.
Title : An Empirical Comparison of Graph Databases (2013 International Conference on Social Computing)
Pub Date : 2013-09-08, DOI: 10.1109/SocialCom.2013.76
Vincent C. Hu, K. Scarfone
Access control (AC) policies can be implemented based on different AC models, which are fundamentally composed of semantically independent AC rules expressing privilege assignments in terms of subject attributes, actions, object attributes, and environment variables of the protected systems. Incorrect implementations of AC policies result in faults that not only leak information but also disable access to it, and faults in AC policies are difficult to detect without the support of verification or automatic fault detection mechanisms. This research proposes an automatic method based on the construction of a simulated logic circuit that simulates the AC rules in AC policies or models. The simulated logic circuit allows real-time detection of policy faults, including conflicts of privilege assignments, leaks of information, and conflicts of interest assignments. Such detection is traditionally done by tools that perform verification or testing after all the rules of the policy/model are complete, and it provides no information about the source of verification errors. The real-time fault detection capability proposed by this research allows a rule fault to be detected and fixed immediately, before the next rule is added to the policy/model, thus requiring no later verification and saving a significant amount of fault-fixing time.
Title : Real-Time Access Control Rule Fault Detection Using a Simulated Logic Circuit (2013 International Conference on Social Computing)
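The real-time detection idea above can be illustrated without the logic-circuit encoding: treat each rule as a condition over subject, action, and object sets, and flag a conflict the moment a new rule overlaps an existing rule with the opposite effect. All names here are hypothetical, and this set-based stand-in is far simpler than the paper's circuit simulation:

```python
GRANT, DENY = "grant", "deny"

class PolicyChecker:
    """Checks each candidate rule against the committed rules at add
    time, so a privilege-assignment conflict surfaces immediately
    instead of during a post-hoc verification pass."""
    def __init__(self):
        self.rules = []  # committed (subjects, actions, objects, effect)

    def add_rule(self, subjects, actions, objects, effect):
        conflicts = [
            r for r in self.rules
            if r[3] != effect                       # opposite effect, and
            and subjects & r[0]                     # overlapping subjects,
            and actions & r[1]                      # actions,
            and objects & r[2]                      # and objects
        ]
        if conflicts:
            return conflicts  # reported before the faulty rule is committed
        self.rules.append((subjects, actions, objects, effect))
        return []

checker = PolicyChecker()
checker.add_rule({"nurse"}, {"read"}, {"chart"}, GRANT)
clash = checker.add_rule({"nurse", "intern"}, {"read", "write"}, {"chart"}, DENY)
print("conflict detected:", bool(clash))  # nurse/read/chart is double-assigned
```

Because the check runs per rule, the author learns exactly which earlier rule caused the clash, mirroring the paper's argument against detection tools that only report errors after the whole policy is written.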