Three Fiscal Policy Experiments in an Agent-Based Macroeconomic Model
Pub Date: 2013-09-08 · DOI: 10.1109/SocialCom.2013.115
Carl M. Gustafson
In this paper, I build from scratch a basic agent-based macroeconomic model featuring fifty representative agents whose decisions to consume and save depend on the current relative performance of the economy at large. I run three experiments in this framework: the first on the effects of tax and spending "flexibility" on stabilizing output, the second on the ability of spending stimulus to stabilize output, and the third on redistributive measures across income groups and their effects on aggregate economic performance. I find that tax and spending flexibility accelerates the path back to stability after an initial imposed downturn; that spending stimulus does much the same, though with a greater initial "kick"; and that redistribution in this model can increase the welfare of lower-income agents without imposing a significant burden on overall performance.
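The abstract does not specify the model's equations, so the following is only a minimal sketch of how such an economy could be wired up. The consumption rule, all parameter values, and the size of the imposed downturn are illustrative assumptions, not the paper's specification.

```python
# Minimal sketch of an economy in the spirit described above: fifty
# agents whose propensity to consume rises when output is above trend
# and falls when it is below. The consumption rule, parameter values,
# and shock size are assumptions, not the paper's model.

N_AGENTS = 50
BASE_INCOME = 100.0   # per-period endowment (assumed)
MPC_BASE = 0.8        # baseline marginal propensity to consume (assumed)

class Agent:
    def __init__(self):
        self.savings = 0.0

    def consume(self, income, output_gap):
        # Spend more when the economy runs above trend, less below it.
        mpc = max(0.0, min(1.0, MPC_BASE + 0.2 * output_gap))
        spending = mpc * income
        self.savings += income - spending
        return spending

agents = [Agent() for _ in range(N_AGENTS)]
trend_output = N_AGENTS * BASE_INCOME * MPC_BASE
output = trend_output

for t in range(30):
    gap = (output - trend_output) / trend_output
    output = sum(a.consume(BASE_INCOME, gap) for a in agents)
    if t == 10:
        output *= 0.9  # impose a downturn, as in the paper's experiments
    print(f"t={t:2d}  output={output:8.2f}  gap={gap:+.3f}")
```

Under this toy rule the procyclical propensity to consume pulls output back toward trend after the shock, which is the kind of recovery path the paper's flexibility and stimulus experiments measure.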
{"title":"Three Fiscal Policy Experiments in an Agent-Based Macroeconomic Model","authors":"Carl M. Gustafson","doi":"10.1109/SocialCom.2013.115","DOIUrl":"https://doi.org/10.1109/SocialCom.2013.115","url":null,"abstract":"In this paper, I build from scratch a basic agent-based macroeconomic model, featuring fifty representative agents whose decisions to consume and save depend on the current relative performance of the economy at-large. I run three different experiments in the framework: the first, on the effects of tax and spending \"flexibility\" on stabilizing output, the second, on the ability of spending stimulus to stabilize output, and the third, on redistributive measures across income groups and their effects on aggregate economic performance. I find that tax and spending flexibility accelerates the path back to stability after an initial imposed downturn, that spending stimulus does much the same, though with a greater initial \"kick\", and that redistribution in this model may take place and increase the welfare of lower-income agents without imposing a significant burden on overall performance.","PeriodicalId":129308,"journal":{"name":"2013 International Conference on Social Computing","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132814718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Partitioning and Scaling Signed Bipartite Graphs for Polarized Political Blogosphere
Pub Date: 2013-09-08 · DOI: 10.1109/SocialCom.2013.32
Sedat Gokalp, M. Temkit, H. Davulcu, I. H. Toroslu
The blogosphere plays an increasingly important role as a forum for public debate. In this paper, given a mixed set of blogs debating a set of political issues from opposing camps, we model the debates as signed bipartite graphs and propose an algorithm for partitioning both the blogs and the issues (i.e., topics, leaders, etc.) comprising the debate into binary opposing camps. Simultaneously, our algorithm places both the blogs and the underlying issues on a univariate scale. Using this scale, a researcher can identify moderate and extreme blogs within each camp, as well as polarizing versus unifying issues. Performance evaluations show that our proposed algorithm provides an effective solution to the problem and performs much better than existing baseline algorithms adapted to this new problem. In our experiments, we used both real data from the political blogosphere and US Congress records, as well as synthetic data obtained by varying the polarization and degree distribution of the graph's vertices, to demonstrate the robustness of our algorithm.
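The abstract does not reproduce the algorithm itself. As a hedged illustration of the problem setup, the sketch below applies a generic rank-1 spectral embedding to a tiny signed blog-issue matrix; the sign of each coordinate splits blogs and issues into camps, and the magnitude serves as a moderate-vs-extreme scale. This is a standard heuristic, not the authors' method, and the matrix is invented.

```python
import numpy as np

# Rows = blogs, columns = issues; +1 support, -1 oppose, 0 silent.
# Toy data, invented for illustration.
A = np.array([
    [ 1,  1, -1],
    [ 1,  0, -1],
    [-1, -1,  1],
    [-1,  1,  1],
])

# The leading singular vectors give a 1-D embedding: the sign of each
# entry assigns a camp, the magnitude scales moderate vs. extreme.
U, S, Vt = np.linalg.svd(A)
sign_fix = np.sign(U[0, 0])          # fix an arbitrary sign convention
blog_scale = U[:, 0] * sign_fix
issue_scale = Vt[0] * sign_fix

print("blog camps :", np.sign(blog_scale))
print("blog scale :", np.round(blog_scale, 2))
print("issue camps:", np.sign(issue_scale))
print("issue scale:", np.round(issue_scale, 2))
```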
{"title":"Partitioning and Scaling Signed Bipartite Graphs for Polarized Political Blogosphere","authors":"Sedat Gokalp, M. Temkit, H. Davulcu, I. H. Toroslu","doi":"10.1109/SocialCom.2013.32","DOIUrl":"https://doi.org/10.1109/SocialCom.2013.32","url":null,"abstract":"Blogosphere plays an increasingly important role as a forum for public debate. In this paper, given a mixed set of blogs debating a set of political issues from opposing camps, we use signed bipartite graphs for modeling debates, and we propose an algorithm for partitioning both the blogs, and the issues (i.e. topics, leaders, etc.) comprising the debate into binary opposing camps. Simultaneously, our algorithm scales both the blogs and the underlying issues on a univariate scale. Using this scale, a researcher can identify moderate and extreme blogs within each camp, and polarizing vs. unifying issues. Through performance evaluations we show that our proposed algorithm provides an effective solution to the problem, and performs much better than existing baseline algorithms adapted to solve this new problem. In our experiments, we used both real data from political blogosphere and US Congress records, as well as synthetic data which were obtained by varying polarization and degree distribution of the vertices of the graph to show the robustness of our algorithm.","PeriodicalId":129308,"journal":{"name":"2013 International Conference on Social Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130968686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Entity Matching in Online Social Networks
Pub Date: 2013-09-08 · DOI: 10.1109/SocialCom.2013.53
Olga Peled, Michael Fire, L. Rokach, Y. Elovici
In recent years, Online Social Networks (OSNs) have become an integral part of our daily lives. There are hundreds of OSNs, each with its own focus and its own particular services and functionalities. To take advantage of the full range of services and functionalities that OSNs offer, users often create several accounts on various OSNs using the same or different personal information. Retrieving all available data about an individual from several OSNs and merging it into one profile can be useful for many purposes. In this paper, we present a method for solving the Entity Resolution (ER) problem of matching user profiles across multiple OSNs. Our method matches two user profiles from two different OSNs using machine learning techniques applied to features extracted from each profile. Using supervised learning and the extracted features, we constructed different classifiers, which were trained and used to rank the probability that two user profiles from two different OSNs belong to the same individual. These classifiers utilized 27 features of mainly three types: name-based features (e.g., the Soundex value of two names), general user-info-based features (e.g., the cosine similarity between two user profiles), and social-network-topology-based features (e.g., the number of mutual friends between two users' friend lists). This experimental study uses real-life data collected from two popular OSNs, Facebook and Xing. The proposed algorithm achieved a classification performance, measured by AUC, of 0.982 in identifying user profiles across the two OSNs.
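The abstract names one example feature from each of the three families; the sketch below computes just those three for a pair of toy profiles. The minimal Soundex implementation and the profile and friend-list fields are illustrative stand-ins, not the paper's exact 27 features.

```python
import math

def soundex(name: str) -> str:
    """Minimal Soundex: first letter plus three digits."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    name = name.lower()
    out, last = name[0].upper(), codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != last:
            out += code
        last = code
    return (out + "000")[:4]

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two bag-of-words profile vectors."""
    num = sum(a[w] * b.get(w, 0) for w in a)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def features(p1: dict, p2: dict) -> dict:
    # One example feature from each of the three families in the paper.
    return {
        "same_soundex": soundex(p1["name"]) == soundex(p2["name"]),
        "profile_cosine": cosine(p1["words"], p2["words"]),
        "mutual_friends": len(p1["friends"] & p2["friends"]),
    }

p1 = {"name": "Jon", "words": {"data": 2, "mining": 1}, "friends": {"a", "b", "c"}}
p2 = {"name": "John", "words": {"data": 1, "graphs": 1}, "friends": {"b", "c", "d"}}
print(features(p1, p2))  # feature vector of the kind fed to the classifiers
```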
{"title":"Entity Matching in Online Social Networks","authors":"Olga Peled, Michael Fire, L. Rokach, Y. Elovici","doi":"10.1109/SocialCom.2013.53","DOIUrl":"https://doi.org/10.1109/SocialCom.2013.53","url":null,"abstract":"In recent years, Online Social Networks (OSNs) have essentially become an integral part of our daily lives. There are hundreds of OSNs, each with its own focus and offers for particular services and functionalities. To take advantage of the full range of services and functionalities that OSNs offer, users often create several accounts on various OSNs using the same or different personal information. Retrieving all available data about an individual from several OSNs and merging it into one profile can be useful for many purposes. In this paper, we present a method for solving the Entity Resolution (ER), problem for matching user profiles across multiple OSNs. Our algorithm is able to match two user profiles from two different OSNs based on machine learning techniques, which uses features extracted from each one of the user profiles. Using supervised learning techniques and extracted features, we constructed different classifiers, which were then trained and used to rank the probability that two user profiles from two different OSNs belong to the same individual. These classifiers utilized 27 features of mainly three types: name based features (i.e., the Soundex value of two names), general user info based features (i.e., the cosine similarity between two user profiles), and social network topological based features (i.e., the number of mutual friends between two users' friends list). This experimental study uses real-life data collected from two popular OSNs, Facebook and Xing. The proposed algorithm was evaluated and its classification performance measured by AUC was 0.982 in identifying user profiles across two OSNs.","PeriodicalId":129308,"journal":{"name":"2013 International Conference on Social Computing","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128784623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Big Data and Policy Design for Data Sovereignty: A Case Study on Copyright and CCL in South Korea
Pub Date: 2013-09-08 · DOI: 10.1109/SocialCom.2013.165
Hyejung Moon, H. Cho
The purpose of this paper is as follows. First, I conceptualize big data as a social problem. Second, I explain the difference between big data and conventional mega information. Third, I recommend a role for government in utilizing big data as a policy tool. Fourth, referring to copyright and CCL (Creative Commons License) cases, I explain the regulation of big data with respect to data sovereignty. Finally, I suggest a direction for policy design for big data. The study finds that policy design for big data should be distinguished from policy design for mega information in order to resolve data sovereignty issues. From a legal-system perspective, big data is generated autonomously; it is openly accessed and shared without any particular intention. From a market perspective, big data is created without intention, and it can change automatically when opened with referencing features such as Linked Data, which raises policy issues of responsibility and authenticity. From a technology perspective, big data is generated in a distributed and diverse way, without any concrete form. We therefore need a different approach.
{"title":"Big Data and Policy Design for Data Sovereignty: A Case Study on Copyright and CCL in South Korea","authors":"Hyejung Moon, H. Cho","doi":"10.1109/SocialCom.2013.165","DOIUrl":"https://doi.org/10.1109/SocialCom.2013.165","url":null,"abstract":"The purpose of this paper is as follows. First, I am trying to conceptualize big data as a social problem. Second, I would like to explain the difference between big data and conventional mega information. Third, I would like to recommend the role of the government for utilization of big data as a policy tools. Fourth, while referring to copyright and CCL(Creative Commons License) cases, I would like to explain the regulation for big data on data sovereignty. Finally, I would like to suggest a direction of policy design for big data. As for the result of this study, policy design for big data should be distinguished from policy design for mega information to solve data sovereignty issues. From a law system perspective, big data is generated autonomously. It has been accessed openly and shared without any intention. In market perspective, big data is created without any intention. Big data can be changed automatically in case of openness with reference feature such as Linked of Data. Some policy issues such as responsibility and authenticity should be raised. Big data is generated in a distributed and diverse way without any concrete form in technology perspective. So, we need a different approach.","PeriodicalId":129308,"journal":{"name":"2013 International Conference on Social Computing","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128755362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-Time Access Control Rule Fault Detection Using a Simulated Logic Circuit
Pub Date: 2013-09-08 · DOI: 10.1109/SocialCom.2013.76
Vincent C. Hu, K. Scarfone
Access control (AC) policies can be implemented based on different AC models, which are fundamentally composed of semantically independent AC rules expressing privilege assignments in terms of subject attributes, actions, object attributes, and environment variables of the protected systems. Incorrect implementations of AC policies result in faults that not only leak information but also disable legitimate access to it, and such faults are difficult to detect without verification or automatic fault-detection mechanisms. This research proposes an automatic method based on the construction of a simulated logic circuit that simulates the AC rules in AC policies or models. The simulated logic circuit allows real-time detection of policy faults, including conflicts of privilege assignments, leaks of information, and conflicts of interest assignments. Such detection is traditionally done by tools that perform verification or testing after all the rules of the policy/model are complete, which provides no information about the source of verification errors. The real-time fault-detection capability proposed here allows a rule fault to be detected and fixed immediately, before the next rule is added to the policy/model, thus requiring no later verification and saving a significant amount of fault-fixing time.
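The circuit construction itself is not given in the abstract. The sketch below captures only the check-on-insert idea with a plain truth-table enumeration: every candidate rule is tested against the already-accepted rules before admission, so a conflicting privilege assignment surfaces the moment it is introduced. The attribute space and rule encoding are invented for illustration and stand in for the paper's simulated logic circuit.

```python
from itertools import product

# Each rule is (condition, decision): the condition constrains some
# attributes (None = don't care) and the decision is GRANT or DENY.
ATTRS = {"role": ["doctor", "nurse"], "action": ["read", "write"]}

def matches(cond, assignment):
    return all(v is None or assignment[k] == v for k, v in cond.items())

def conflicts(new_rule, accepted):
    """Return an attribute assignment on which new_rule contradicts an
    accepted rule, or None if the rule can be safely admitted."""
    cond_new, decision_new = new_rule
    for vals in product(*ATTRS.values()):
        assignment = dict(zip(ATTRS, vals))
        if not matches(cond_new, assignment):
            continue
        for cond, decision in accepted:
            if matches(cond, assignment) and decision != decision_new:
                return assignment  # first conflicting input found
    return None

accepted = [({"role": "doctor", "action": None}, "GRANT")]
new = ({"role": None, "action": "write"}, "DENY")
print("conflict at:", conflicts(new, accepted))
# -> {'role': 'doctor', 'action': 'write'} is both granted and denied,
#    reported before the faulty rule enters the policy.
```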
{"title":"Real-Time Access Control Rule Fault Detection Using a Simulated Logic Circuit","authors":"Vincent C. Hu, K. Scarfone","doi":"10.1109/SocialCom.2013.76","DOIUrl":"https://doi.org/10.1109/SocialCom.2013.76","url":null,"abstract":"Access control (AC) policies can be implemented based on different AC models, which are fundamentally composed by semantically independent AC rules in expressions of privilege assignments described by attributes of subjects/attributes, actions, objects/attributes, and environment variables of the protected systems. Incorrect implementations of AC policies result in faults that not only leak but also disable access of information, and faults in AC policies are difficult to detect without support of verification or automatic fault detection mechanisms. This research proposes an automatic method through the construction of a simulated logic circuit that simulates AC rules in AC policies or models. The simulated logic circuit allows real-time detection of policy faults including conflicts of privilege assignments, leaks of information, and conflicts of interest assignments. Such detection is traditionally done by tools that perform verification or testing after all the rules of the policy/model are completed, and it provides no information about the source of verification errors. The real-time fault detecting capability proposed by this research allows a rule fault to be detected and fixed immediately before the next rule is added to the policy/model, thus requiring no later verification and saving a significant amount of fault fixing time.","PeriodicalId":129308,"journal":{"name":"2013 International Conference on Social Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129069529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Natural Language Processing and Big Data - An Ontology-Based Approach for Cross-Lingual Information Retrieval
Pub Date: 2013-09-08 · DOI: 10.1109/SocialCom.2013.108
J. Monti, Mario Monteleone, Maria Pia di Buono, Federica Marano
Extracting relevant information in a multilingual context from massive amounts of unstructured, structured, and semi-structured data is a challenging task. Various theories have been developed and applied to ease access to multicultural and multilingual resources. This paper describes a methodology for developing an ontology-based Cross-Language Information Retrieval (CLIR) application and shows how the translation of Natural Language (NL) queries in any language can be achieved by means of a knowledge-driven approach that semi-automatically maps natural language to formal language, thereby simplifying and improving human-computer interaction and communication. The outlined research activities are based on Lexicon-Grammar (LG), a method devised for natural language formalization, automatic textual analysis, and parsing. Thanks to its main characteristics, LG is independent of factors that are critical for other approaches, e.g., interaction type (voice- or keyboard-based), length of sentences and propositions, type of vocabulary used, and restrictions due to users' idiolects. The feasibility of our knowledge-based methodological framework, which maps both data and metadata, will be tested for CLIR by implementing a domain-specific early prototype system.
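To make the mapping idea concrete, here is a toy sketch in which a multilingual lexicon links surface words to ontology concepts and a matched pattern is rewritten as a formal query. The lexicon entries, the single supported pattern, and the SPARQL-like output are all invented for illustration and are far simpler than the paper's Lexicon-Grammar resources.

```python
# Toy knowledge-driven NL-to-formal-language mapping (assumed names).
LEXICON = {
    "painter": "ex:Painter", "pittore": "ex:Painter",   # English / Italian
    "painted": "ex:created", "dipinse": "ex:created",
}

def nl_to_sparql(query: str) -> str:
    # Keep only words the domain lexicon knows, mapped to concepts.
    concepts = [LEXICON[w] for w in query.lower().split() if w in LEXICON]
    if len(concepts) == 2:  # "<class> <relation> ..." pattern only
        cls, rel = concepts
        return f"SELECT ?x ?y WHERE {{ ?x a {cls} . ?x {rel} ?y }}"
    raise ValueError("pattern not covered by this toy grammar")

# Queries in different languages map to the same formal query.
print(nl_to_sparql("Which painter painted this?"))
print(nl_to_sparql("Quale pittore dipinse questo?"))
```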
{"title":"Natural Language Processing and Big Data - An Ontology-Based Approach for Cross-Lingual Information Retrieval","authors":"J. Monti, Mario Monteleone, Maria Pia di Buono, Federica Marano","doi":"10.1109/SocialCom.2013.108","DOIUrl":"https://doi.org/10.1109/SocialCom.2013.108","url":null,"abstract":"Extracting relevant information in multilingual context from massive amounts of unstructured, structured and semi-structured data is a challenging task. Various theories have been developed and applied to ease the access to multicultural and multilingual resources. This papers describes a methodology for the development of an ontology-based Cross-Language Information Retrieval (CLIR) application and shows how it is possible to achieve the translation of Natural Language (NL) queries in any language by means of a knowledge-driven approach which allows to semi-automatically map natural language to formal language, simplifying and improving in this way the human-computer interaction and communication. The outlined research activities are based on Lexicon-Grammar (LG), a method devised for natural language formalization, automatic textual analysis and parsing. Thanks to its main characteristics, LG is independent from factors which are critical for other approaches, i.e. interaction type (voice or keyboard-based), length of sentences and propositions, type of vocabulary used and restrictions due to users' idiolects. The feasibility of our knowledge-based methodological framework, which allows mapping both data and metadata, will be tested for CLIR by implementing a domain-specific early prototype system.","PeriodicalId":129308,"journal":{"name":"2013 International Conference on Social Computing","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132610423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Conjunction for Private Stream Searching
Pub Date: 2013-09-08 · DOI: 10.1109/SocialCom.2013.69
Michael J. Oehler, D. Phatak
Our contribution defines a conjunction operator for private stream searching. Private stream searching is a system of cryptographic methods that preserves the confidentiality of the search criteria and the result. The system uses an encrypted filter to conceal the search terms, processes a search without decrypting these terms, and saves the result to an encrypted buffer. Fundamentally, the system provides a private search capability based on a logical disjunction of search terms. Our conjunction operator broadens this search capability, and achieves this without significantly increasing the complexity of the private search system. The conjunction is processed as a bit-wise summation of hashed keyword values that references an encrypted entry in the filter. The method is best suited to a conjunction of fields from a record, does not require the computation of a bilinear map, as prior research does, and offers a practical utility that integrates into private stream searching. We demonstrate this practicality by including the conjunction operator in our domain-specific language for private packet filtering.
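The following toy sketch shows only the indexing idea: hash each field of the conjunction and combine the hashes into a single value that addresses one filter slot, so that "src=10.0.0.1 AND port=443" behaves like one derived keyword. Real private stream searching keeps the filter encrypted (typically under an additively homomorphic scheme); none of that cryptography is modeled here, and the combination details are assumptions based on the abstract's description.

```python
import hashlib

FILTER_SIZE = 2 ** 16  # number of filter slots (assumed)

def h(term: str) -> int:
    """Hash a 'field=value' keyword to an integer."""
    return int.from_bytes(hashlib.sha256(term.encode()).digest()[:8], "big")

def conjunction_index(fields: dict) -> int:
    # Summation of the hashed field values (mod filter size), so the
    # whole conjunction references a single filter entry.
    total = 0
    for key in sorted(fields):  # sorted -> order-independent
        total = (total + h(f"{key}={fields[key]}")) % FILTER_SIZE
    return total

query = {"src": "10.0.0.1", "port": "443"}
record = {"src": "10.0.0.1", "port": "443", "proto": "tcp"}

idx_query = conjunction_index(query)
idx_record = conjunction_index({k: record[k] for k in query})
print(idx_query == idx_record)  # True: the record hits the conjunction's slot
```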
{"title":"A Conjunction for Private Stream Searching","authors":"Michael J. Oehler, D. Phatak","doi":"10.1109/SocialCom.2013.69","DOIUrl":"https://doi.org/10.1109/SocialCom.2013.69","url":null,"abstract":"Our contribution defines a conjunction operator for private stream searching. Private stream searching is a system of cryptographic methods that preserves the confidentiality of the search criteria and the result. The system uses an encrypted filter to conceal the search terms, processes a search without decrypting these terms, and saves the result to an encrypted buffer. Fundamentally, the system provides a private search capability based on a logical disjunction of search terms. Our conjunction operator broadens the search capability, and achieves this without significantly increasing the complexity of the private search system. The conjunction is processed as a bit wise summation of hashed keyword values to reference an encrypted entry in the filter. The method is best suited for a conjunction of fields from a record, does not impute a calculation of bilinear map, as required in prior research, and offers a practical utility that integrates into private stream searching. We demonstrate the practicality by including the conjunction operator into our domain specific language for private packet filtering.","PeriodicalId":129308,"journal":{"name":"2013 International Conference on Social Computing","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131728012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic Labeling of Training Data for Collecting Tweets for Ambiguous TV Program Titles
Pub Date: 2013-09-08 · DOI: 10.1109/SocialCom.2013.119
M. Erdmann, Erik Ward, K. Ikeda, Gen Hattori, C. Ono, Y. Takishima
Twitter is a popular medium for sharing opinions on TV programs, and the analysis of TV-related tweets is attracting a lot of interest. However, when collecting all tweets containing a given TV program title, we obtain a large number of unrelated tweets, because many TV program titles are ambiguous. Using supervised learning, TV-related tweets can be collected with high accuracy. The goal of our proposed method is to automate the labeling process, in order to eliminate the cost of data labeling without sacrificing classification accuracy. When creating the training data, we use only tweets of unambiguous TV program titles. To decide whether a TV program title is ambiguous, we automatically determine whether it can also be used as a common expression or named entity. In two experiments, in which we collected tweets for 32 ambiguous TV program titles, we achieved the same (78.2%) or even higher (79.1%) classification accuracy with automatically labeled training data as with manually labeled data, while effectively eliminating labeling costs.
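A hedged sketch of the training-data construction: tweets mentioning an unambiguous title become positive (TV-related) examples and tweets mentioning no title become negative ones, while tweets mentioning only ambiguous titles are left unlabeled. The dictionary test for ambiguity below is a simple stand-in for the paper's common-expression/named-entity check, and all titles and tweets are invented.

```python
COMMON_WORDS = {"friends", "lost", "community"}  # toy ambiguity dictionary

def is_ambiguous(title: str) -> bool:
    # Stand-in for the paper's common-expression / named-entity test.
    return title.lower() in COMMON_WORDS

def auto_label(tweets, titles):
    unambiguous = [t for t in titles if not is_ambiguous(t)]
    labeled = []
    for text in tweets:
        low = text.lower()
        if any(t.lower() in low for t in unambiguous):
            labeled.append((text, 1))  # positive: TV-related
        elif not any(t.lower() in low for t in titles):
            labeled.append((text, 0))  # negative: no title mentioned
        # tweets mentioning only ambiguous titles stay unlabeled
    return labeled

tweets = ["Watching Breaking Bad tonight!",
          "I lost my keys again",
          "Lost was such a great finale"]
titles = ["Breaking Bad", "Lost"]
print(auto_label(tweets, titles))
# -> only the unambiguous mention is auto-labeled as TV-related
```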
{"title":"Automatic Labeling of Training Data for Collecting Tweets for Ambiguous TV Program Titles","authors":"M. Erdmann, Erik Ward, K. Ikeda, Gen Hattori, C. Ono, Y. Takishima","doi":"10.1109/SocialCom.2013.119","DOIUrl":"https://doi.org/10.1109/SocialCom.2013.119","url":null,"abstract":"Twitter is a popular medium for sharing opinions on TV programs, and the analysis of TV related tweets is attracting a lot of interest. However, when collecting all tweets containing a given TV program title, we obtain a large number of unrelated tweets, due to the fact that many of the TV program titles are ambiguous. Using supervised learning, TV related tweets can be collected with high accuracy. The goal of our proposed method is to automate the labeling process, in order to eliminate the cost required for data labeling without sacrificing classification accuracy. When creating the training data, we use only tweets of unambiguous TV program titles. In order to decide whether a TV program title is ambiguous, we automatically determine whether it can be used as a common expression or named entity. In two experiments, in which we collected tweets for 32 ambiguous TV program titles, we achieved the same (78.2%) or even higher classification accuracy (79.1%) with automatically labeled training data as with manually labeled data, while effectively eliminating labeling costs.","PeriodicalId":129308,"journal":{"name":"2013 International Conference on Social Computing","volume":"111 3S 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131968723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust, Scalable Anomaly Detection for Large Collections of Images
Pub Date: 2013-09-08 · DOI: 10.1109/SocialCom.2013.170
Michael S. Kim
A novel robust anomaly detection algorithm is applied to an image dataset using Apache Pig, Jython, and GNU Octave. Each image in the set is transformed into a feature vector that represents color, edges, and texture numerically. Data is streamed via Pig through standard and user-defined GNU Octave functions for feature transformation. Once the image set is transformed into the feature space, the dataset matrix (where the rows are distinct images and the columns are features) is input into an original anomaly detection algorithm written by the author. This unsupervised outlier detection method scores outliers in linear time; it is linear in the number of outliers but still suffers from the curse of dimensionality in the feature space. The top-scoring images are considered anomalies. Two experiments are conducted: the first tests whether the top-scoring images coincide with images marked as outliers in a prior image-selection step, and the second examines the scalability of the Pig implementation on a larger dataset. The results are analyzed quantitatively and qualitatively.
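The author's scoring algorithm is not specified in the abstract. As a stand-in, the sketch below scores each row of an image-feature matrix by its robustly scaled distance from the coordinate-wise median, which is cheap and insensitive to the outliers themselves; the data and feature dimensions are invented.

```python
import numpy as np

# Rows = images, columns = color/edge/texture features (toy data).
rng = np.random.default_rng(0)
X = rng.normal(0, 1, size=(100, 16))   # 100 images, 16 features
X[:3] += 6                             # plant 3 anomalous images

center = np.median(X, axis=0)          # robust center, unlike the mean
scale = np.median(np.abs(X - center), axis=0) + 1e-9  # per-feature MAD
scores = np.linalg.norm((X - center) / scale, axis=1) # one pass over rows

top = np.argsort(scores)[::-1][:5]
print("highest-scoring rows:", top)    # the planted rows 0-2 rank first
```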
{"title":"Robust, Scalable Anomaly Detection for Large Collections of Images","authors":"Michael S. Kim","doi":"10.1109/SocialCom.2013.170","DOIUrl":"https://doi.org/10.1109/SocialCom.2013.170","url":null,"abstract":"A novel robust anomaly detection algorithm is applied to an image dataset using Apache Pig, Jython and GNU Octave. Each image in the set is transformed into a feature vector that represents color, edges, and texture numerically. Data is streamed using Pig through standard and user defined GNU Octave functions for feature transformation. Once the image set is transformed into the feature space, the dataset matrix (where the rows are distinct images, and the columns are features) is input into an original anomaly detection algorithm written by the author. This unsupervised outlier detection method scores outliers in linear time. The method is linear in the number of outliers but still suffers from the curse of dimensionality (in the feature space). The top scoring images are considered anomalies. Two experiments are conducted. The first experiment tests if top scoring images coincide with images which are marked as outliers in a prior image selection step. The second examines the scalability of the implementation in Pig using a larger data set. The results are analyzed quantitatively and qualitatively.","PeriodicalId":129308,"journal":{"name":"2013 International Conference on Social Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130271455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Framework for Secure Service Composition
Pub Date: 2013-09-08 · DOI: 10.1109/SocialCom.2013.97
Achim D. Brucker, Francesco Malmignati, M. Merabti, Q. Shi, Bo Zhou
Modern applications are inherently heterogeneous: they are built by composing loosely coupled services that are usually offered and operated by different service providers. While this approach increases the flexibility of the composed applications, it makes the implementation of security and trustworthiness requirements difficult. As the number of security requirements increases dramatically, new approaches are needed that integrate security requirements from the very beginning of composing service-based applications. In this paper, we present a framework for secure service composition using a model-based approach for specifying, building, and executing composed services. As a unique feature, this framework treats security requirements as first-class citizens and thus avoids the ``security as an afterthought'' paradigm.
{"title":"A Framework for Secure Service Composition","authors":"Achim D. Brucker, Francesco Malmignati, M. Merabti, Q. Shi, Bo Zhou","doi":"10.1109/SocialCom.2013.97","DOIUrl":"https://doi.org/10.1109/SocialCom.2013.97","url":null,"abstract":"Modern applications are inherently heterogeneous: they are built by composing loosely coupled services that are, usually, offered and operated by different service providers. While this approach increases the flexibility of the composed applications, it makes the implementation of security and trustworthiness requirements difficult. As the number of security requirements is increasing dramatically, there is a need for new approaches that integrate security requirements right from the beginning while composing service-based applications. In this paper, we present a framework for secure service composition using a model-based approach for specifying, building, and executing composed services. As a unique feature, this framework integrates security requirements as a first class citizen and, thus, avoids the ``security as an afterthought'' paradigm.","PeriodicalId":129308,"journal":{"name":"2013 International Conference on Social Computing","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131644532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}