Pub Date : 2014-10-01DOI: 10.4108/ICST.COLLABORATECOM.2014.257262
Napoleon Paxton, Dae-il Jang, I. S. Moskowitz, Gail-Joon Ahn, S. Russell
Botnets continue to threaten the security landscape of computer networks worldwide. This is due in part to the time lag between the discovery of botnet traffic and the identification of actionable intelligence derived from its analysis. In this article we present a novel method to fill that gap by segmenting botnet traffic into communities and identifying the category of each community member. This information can be used to identify attack members (bot nodes), command-and-control members (C&C nodes), botnet controller members (botmaster nodes), and victim members (victim nodes), all of which can be used immediately in forensics or in defense against future attacks. The true novelty of our approach is the segmentation of the malicious network data into relational communities rather than just spatially based clusters. The relational nature of the communities allows us to discover community roles without a deep analysis of the entire network. We discuss the feasibility and practicality of our method through experiments with real-world botnet traffic. Our experimental results show a high detection rate with a low false positive rate, which suggests that our approach can be a valuable addition to a defense-in-depth strategy.
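The role categories the abstract names can be illustrated with a small sketch. The toy traffic graph, node names, and the degree-based role heuristic below are our illustrative assumptions, not the authors' actual detection rules:

```python
from collections import defaultdict

# Toy flow records: (src, dst) pairs observed in captured traffic.
# Node names and the degree-based heuristic are illustrative assumptions.
edges = [
    ("botmaster", "cc1"), ("cc1", "bot1"), ("cc1", "bot2"),
    ("bot1", "victim"), ("bot2", "victim"),
]

out_deg, in_deg = defaultdict(int), defaultdict(int)
nodes = set()
for s, d in edges:
    out_deg[s] += 1
    in_deg[d] += 1
    nodes.update((s, d))

def guess_role(n):
    if in_deg[n] == 0 and out_deg[n] > 0:
        return "botmaster"  # only issues commands, never receives
    if out_deg[n] > 1:
        return "C2"         # fans commands out to several bots
    if out_deg[n] == 0:
        return "victim"     # only receives traffic
    return "bot"            # relays a single attack stream

roles = {n: guess_role(n) for n in nodes}
```

A real system would of course derive such roles from the relational community structure rather than raw degrees; the sketch only shows the shape of the output (a node-to-role map) that makes the intelligence immediately actionable.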
{"title":"Discovering and analyzing deviant communities: Methods and experiments","authors":"Napoleon Paxton, Dae-il Jang, I. S. Moskowitz, Gail-Joon Ahn, S. Russell","doi":"10.4108/ICST.COLLABORATECOM.2014.257262","DOIUrl":"https://doi.org/10.4108/ICST.COLLABORATECOM.2014.257262","url":null,"abstract":"Botnets continue to threaten the security landscape of computer networks worldwide. This is due in part to the time lag present between discovery of botnet traffic and identification of actionable intelligence derived from the traffic analysis. In this article we present a novel method to fill such a gap by segmenting botnet traffic into communities and identifying the category of each community member. This information can be used to identify attack members (bot nodes), command and control members (Command and Control nodes), botnet controller members (botmaster nodes), and victim members (victim nodes). All of which can be used immediately in forensics or in defense of future attacks. The true novelty of our approach is the segmentation of the malicious network data into relational communities and not just spatially based clusters. The relational nature of the communities allows us to discover the community roles without a deep analysis of the entire network. We discuss the feasibility and practicality of our method through experiments with real-world botnet traffic. 
Our experimental results show a high detection rate with a low false positive rate, which gives encouragement that our approach can be a valuable addition to a defense in depth strategy.","PeriodicalId":432345,"journal":{"name":"10th IEEE International Conference on Collaborative Computing: Networking, Applications and Worksharing","volume":"2012 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133272462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-10-01DOI: 10.4108/ICST.COLLABORATECOM.2014.257474
Amir Abdolrashidi, Lakshmish Ramaswamy, David S. Narron
Distributed vertex-centric graph processing systems have recently been proposed to perform different types of analytics on large graphs. These systems exploit the parallelism of shared-nothing clusters. In this work we propose a novel model for the performance cost of such clusters. We also define novel metrics related to the workload balance and network communication cost of clusters processing massive real graph datasets. We empirically investigate the effects of different graph partitioning mechanisms and their tradeoffs for two different categories of graph processing algorithms.
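The two cost dimensions the abstract names, workload balance and network communication, can be made concrete for a given partition assignment. The toy graph, the two-way partition, and the exact metric definitions below are our illustrative assumptions, not the paper's model:

```python
from collections import Counter

# Toy undirected graph and a two-way vertex partition (vertex -> worker).
# Both are illustrative assumptions for the sketch.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
partition = {0: "A", 1: "A", 2: "B", 3: "B"}

# Edge cut: edges whose endpoints live on different workers need messages,
# so the cut size is a common proxy for network communication cost.
edge_cut = sum(1 for u, v in edges if partition[u] != partition[v])

# Load imbalance: largest per-worker vertex count over the ideal even share
# (1.0 means perfectly balanced).
load = Counter(partition.values())
imbalance = max(load.values()) / (len(partition) / len(load))
```

Partitioners trade these quantities off against each other: a cut-minimizing partition may skew the load, and a perfectly balanced one may cut many edges, which is exactly the tradeoff the paper studies empirically.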
{"title":"Performance modeling of computation and communication tradeoffs in vertex-centric graph processing clusters","authors":"Amir Abdolrashidi, Lakshmish Ramaswamy, David S. Narron","doi":"10.4108/ICST.COLLABORATECOM.2014.257474","DOIUrl":"https://doi.org/10.4108/ICST.COLLABORATECOM.2014.257474","url":null,"abstract":"Distributed vertex-centric graph processing systems have been recently proposed to perform different types of analytics on large graphs. These systems utilize the parallelism of shared nothing clusters. In this work we propose a novel model for the performance cost of such clusters.We also define novel metrics related to the workload balance and network communication cost of clusters processing massive real graph datasets. We empirically investigate the effects of different graph partitioning mechanisms and their tradeoff for two different categories of graph processing algorithms.","PeriodicalId":432345,"journal":{"name":"10th IEEE International Conference on Collaborative Computing: Networking, Applications and Worksharing","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126452564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-10-01DOI: 10.4108/ICST.COLLABORATECOM.2014.257231
Aleksandar Karadimce, D. Davcev
Smart mobile devices are already used for everyday communication between people. They have become ubiquitous devices for receiving and sharing important information, but they possess only limited capability for storing and processing multimedia content. Given that mobile devices will always have these limitations, the proposed solution is to combine collaborative, adaptive multimedia content delivery with cloud computing technology. To improve the perceived quality of m-learning systems, the proposed model delivers three services, for the selection, transcoding, and authoring of multimedia content, which should be hosted under the SaaS cloud computing model. The proposed model for collaborative and adaptive multimedia content delivery will provide mobile learners with cloud-based services. These services will facilitate the delivery of the preferred multimedia content across heterogeneous networks to the various types of mobile devices that learners use. The advantage of the proposed framework is demonstrated using a sample mobile application in our case study, which evaluates the operation of the multimedia content selection and authoring services.
{"title":"Model for collaborative and adaptive multimedia content delivery in a collaborative m-learning environment","authors":"Aleksandar Karadimce, D. Davcev","doi":"10.4108/ICST.COLLABORATECOM.2014.257231","DOIUrl":"https://doi.org/10.4108/ICST.COLLABORATECOM.2014.257231","url":null,"abstract":"Smart mobile devices are already used for everyday communication between people. They have become ubiquitous devices for receiving and sharing important information, but they poses limited capability of storing and processing multimedia content. Considering that mobile devices will always have these mentioned limitations, the proposed solution is to use collaborative, adaptive multimedia content delivery and the cloud computing technology. In order to gain increased perception of quality of m-learning systems, the proposed model delivers three services for selection, transcoding and authoring multimedia content, which should be hosted by SaaS cloud computing model. The proposed model for collaborative and adaptive multimedia content delivery will provide the mobile learners with cloud-based services. These services will facilitate the delivery of the preferred multimedia content across heterogeneous network to various types of mobile devices that learners are using. 
The advantage of the proposed framework is proved by using a sample mobile application for our case study, which estimates the operation of the service for multimedia content selection and authoring service.","PeriodicalId":432345,"journal":{"name":"10th IEEE International Conference on Collaborative Computing: Networking, Applications and Worksharing","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125355273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-10-01DOI: 10.4108/ICST.COLLABORATECOM.2014.257313
S. Chaumette, Jonathan Ouoba
It is acknowledged that in a mobile environment the wireless technologies available at each node can be exploited to achieve efficient peer-to-peer communications. We have therefore developed a multilevel platform whose goal is to allow a set of mobile terminals to communicate securely in a peer-to-peer manner, using the most appropriate available technology for the context at hand. The scenario we focus on targets information sharing for collaboration purposes among the mobile nodes of the network. The study of this scenario led us to identify the main operations required to achieve it, the central one being the publication of profiles. This operation allows a node to publish a description (which we call a profile) of the information it is willing to share with the other nodes. Two approaches, direct transmission and relay transmission, are considered so that the publication of profiles can be performed in the most efficient way, according to the available communication technologies and the dynamics of the network. In this paper, we first present the target environments that we consider and the approach that we have chosen to implement in our multilevel platform. We then focus on the publication of profiles and describe the two transmission modes (direct transmission and relay transmission) under consideration. We analytically study and compare them in terms of the probability of successfully delivering a given message in the target context defined above. We conclude with future research directions.
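The direct-vs-relay comparison lends itself to a minimal analytical sketch. Assuming independent per-hop delivery probabilities (our simplification, not necessarily the paper's model):

```python
# Minimal delivery-probability model under the assumption of independent
# link successes; the specific probability values are illustrative.

def p_direct(p_link: float) -> float:
    """Profile delivered over a single direct hop."""
    return p_link

def p_relay(p_hop1: float, p_hop2: float) -> float:
    """Profile delivered via an intermediate relay: both hops must succeed."""
    return p_hop1 * p_hop2

# Relaying only wins when the direct link is weaker than the product of the
# two hop probabilities, e.g. a poor long-range link vs two good short ones:
assert p_relay(0.9, 0.9) > p_direct(0.7)
assert p_relay(0.9, 0.9) < p_direct(0.9)
```

This already captures why the choice of mode must depend on the context: under independence a relay path can never beat a direct link of the same per-hop quality, so relaying pays off only when it substitutes two strong links for one weak one.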
{"title":"Direct transmission vs relay transmission for information dissemination in a MANet: an analytical study","authors":"S. Chaumette, Jonathan Ouoba","doi":"10.4108/ICST.COLLABORATECOM.2014.257313","DOIUrl":"https://doi.org/10.4108/ICST.COLLABORATECOM.2014.257313","url":null,"abstract":"It is acknowledged that in a mobile environment the wireless technologies that are available at each node can be exploited so as to achieve efficient peer-to-peer communications. Therefore we have developed a multilevel platform the goal of which is to allow a set of mobile terminals to securely communicate in a peer-to-peer manner by using the most appropriate available technology according to the context at hand. The scenario that we have chosen to focus on targets information sharing for collaboration purpose between the mobile nodes of the network. The study of this scenario led us to identify the main operations that are required to achieve it, the central process being the publication of profiles. This operation is meant to allow a node to publish a description (that we call a profile) of the information it is willing to share with the other nodes. Two approaches, direct transmission and relay transmission, are considered so that the publication of profiles can be performed in the most efficient way (according to the available communication technologies and the dynamics of the network). In this paper, we first present the target environments that we consider and the approach that we have chosen to implement in our multilevel platform. We then focus on the publication of profiles and we highlight the two transmission modes (direct transmission and relay transmission) that we have chosen to consider. We analytically study and compare them in terms of the probability to successfully deliver a given message in the target context defined above. 
We conclude with future research directions.","PeriodicalId":432345,"journal":{"name":"10th IEEE International Conference on Collaborative Computing: Networking, Applications and Worksharing","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126884367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-10-01DOI: 10.4108/ICST.COLLABORATECOM.2014.257493
Fayez Khazalah, Zaki Malik, A. Rezgui
Manually extracting Ontop mapping rules and an OWL ontology from a relational schema is a tedious task. We present an automatic approach for extracting Ontop mappings and an OWL ontology from an existing database schema. End users can then access the underlying data source through SPARQL queries. A SPARQL query is written against the extracted ontology, so the end user does not need to know about the underlying data source or its schema. The proposed approach takes into consideration the different relationships between entities of the database schema: instead of extracting a flat ontology that is an exact copy of the database schema, it extracts a rich ontology. The extracted ontology can also be used as an intermediary between a domain ontology and the underlying database schema. The experimental results indicate that the extracted mappings and ontology are accurate, i.e., end users can query all data (using SPARQL) from the underlying database source just as if they had written SQL queries.
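The flat extraction that the paper improves upon is easy to picture: every table becomes a class and every column a data property. The toy schema, URI prefix, and textual rule format below are illustrative assumptions; real Ontop mappings use its OBDA/R2RML mapping syntax, and the paper's approach additionally models relationships such as foreign keys rather than stopping at this flat form:

```python
# Hypothetical flat schema-to-ontology extraction (the baseline the paper
# goes beyond). Schema, namespace, and rule format are assumptions.
schema = {"employee": ["id", "name", "dept_id"]}
PREFIX = "http://example.org/onto#"  # hypothetical namespace

def extract_flat_mappings(schema):
    rules = []
    for table, cols in schema.items():
        cls = PREFIX + table.capitalize()
        rules.append(f"{table}(row) -> {cls}")          # table -> class
        for col in cols:
            rules.append(f"{table}.{col} -> {PREFIX}{col}")  # column -> property
    return rules

rules = extract_flat_mappings(schema)
```

A rich extraction would, for example, turn the `dept_id` foreign key into an object property linking `Employee` to a `Department` class instead of a plain data property, which is the kind of relationship-aware mapping the paper targets.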
{"title":"Automatic mapping rules and OWL ontology extraction for the OBDA Ontop","authors":"Fayez Khazalah, Zaki Malik, A. Rezgui","doi":"10.4108/ICST.COLLABORATECOM.2014.257493","DOIUrl":"https://doi.org/10.4108/ICST.COLLABORATECOM.2014.257493","url":null,"abstract":"Extracting Ontop mapping rules and OWL ontology manually from a relational schema is a tedious task. We present an automatic approach for extracting Ontop mappings and OWL ontology from an existing database schema. The end users can access the underlying data source through SPARQL queries. A SPARQL query is written according to the extracted ontology and the end user does not need to know about the underlying data source and its schema. The proposed approach takes into consideration the different relationships between entities of the database schema. Instead of extracting a flat ontology that is an exact copy of the database schema, it extracts a rich ontology. The extracted ontology can also be used as an intermediate between a domain ontology and the underlying database schema. The experiment results indicate that the extracted mappings and ontology are accurate. i.e., end users can query all data (using SPARQL) from the underlying database source in the same way as if they have written SQL queries.","PeriodicalId":432345,"journal":{"name":"10th IEEE International Conference on Collaborative Computing: Networking, Applications and Worksharing","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122567964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-10-01DOI: 10.4108/ICST.COLLABORATECOM.2014.257245
Dave Murray-Rust, Ognjen Scekic, P. Papapanagiotou, Hong Linh Truong, D. Robertson, S. Dustdar
Today's crowdsourcing systems are predominantly used for processing independent tasks with simplistic coordination. As such, they offer limited support for handling complex, intellectually and organizationally challenging types of labour, such as software development. To support crowdsourcing of software development processes, the system needs to enact coordination mechanisms which integrate human creativity with machine support. While workflows can be used to handle highly structured and predictable labour processes, they are less suitable for software development methodologies, where unpredictability is an unavoidable part of the process. This is especially true in the phases of requirements elicitation and feature development, when both the client and development communities change over time. In this paper we present models and techniques for the coordination of human workers in crowdsourced software development environments. The techniques augment the existing Social Compute Unit (SCU) concept, a general framework for the management of ad hoc human worker teams, with versatile coordination protocols expressed in the Lightweight Social Calculus (LSC). This approach allows us to combine coordination and quality constraints with dynamic assessments of software users' desires, while dynamically choosing appropriate software development coordination models.
{"title":"A collaboration model for community-based Software Development with social machines","authors":"Dave Murray-Rust, Ognjen Scekic, P. Papapanagiotou, Hong Linh Truong, D. Robertson, S. Dustdar","doi":"10.4108/ICST.COLLABORATECOM.2014.257245","DOIUrl":"https://doi.org/10.4108/ICST.COLLABORATECOM.2014.257245","url":null,"abstract":"Today's crowdsourcing systems are predominantly used for processing independent tasks with simplistic coordination. As such, they offer limited support for handling complex, intellectually and organizationally challenging labour types, such as software development. In order to support crowdsourcing of the software development processes, the system needs to enact coordination mechanisms which integrate human creativity with machine support. While workflows can be used to handle highly-structured and predictable labour processes, they are less suitable for software development methodologies where unpredictability is an unavoidable part the process. This is especially true in phases of requirement elicitation and feature development, when both the client and development communities change with time. In this paper we present models and techniques for coordination of human workers in crowdsourced software development environments. The techniques augment the existing Social Compute Unit (SCU) concept-a general framework for management of ad-hoc human worker teams-with versatile coordination protocols expressed in the Lightweight Social Calculus (LSC). 
This approach allows us to combine coordination and quality constraints with dynamic assessments of software-user's desires, while dynamically choosing appropriate software development coordination models.","PeriodicalId":432345,"journal":{"name":"10th IEEE International Conference on Collaborative Computing: Networking, Applications and Worksharing","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133960558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-10-01DOI: 10.4108/ICST.COLLABORATECOM.2014.257621
Rajesh Vargheese, Y. Viniotis
The US Affordable Care Act established the 30-day readmission protection program as one of the baselines for measuring quality of care at hospitals and after discharge. With reduced payments penalizing hospitals with excessive readmissions, hospitals have increased their focus on managing post-discharge care. Given the emphasis on prevention and proactive care, integrated approaches are required that can collect relevant data from patients, process it efficiently and in a timely manner, predict risk patterns in advance, and enable seamless collaboration between patients and the care team. This allows care teams to proactively manage the care of the patient and limit complications and readmissions. Internet of Things (IoT) enabled collaborative cloud-based e-health is evolving as one of the key transformational approaches to 30-day readmission avoidance. While sensors provide critical data, they face significant constraints in processing, power, storage, and overall context. The power and capabilities of the cloud can augment the local visibility of sensors by providing the capabilities that the sensors lack. In this work, we define these capabilities as the five P's: Provisioning, Policy Management, Processing, Protection, and Prediction. We argue that by bringing these elements together, the e-health architecture can take data from sensors securely, transfer it to the cloud, and generate insights and actions that help improve healthcare outcomes in a timely manner. Cloud management plays a critical role in ensuring the integrity and availability of vital information. Blind spots caused by unavailable or compromised data can result in missed opportunities for proactive care; our proposed architecture ensures data availability, processing availability, and integrity, and is thus very important in a 30-day readmission context.
{"title":"Influencing data availability in IoT enabled cloud based e-health in a 30 day readmission context","authors":"Rajesh Vargheese, Y. Viniotis","doi":"10.4108/ICST.COLLABORATECOM.2014.257621","DOIUrl":"https://doi.org/10.4108/ICST.COLLABORATECOM.2014.257621","url":null,"abstract":"The US healthcare Affordable Care Act established the 30 day readmission protection program as one of the base lines of measuring quality of care at hospitals and post discharge. With reduced payment penalties for hospitals with excessive readmissions, hospitals have increased their focus on managing post discharge care. With the emphasis on prevention and proactive care, integrated approaches that have the ability to collect relevant data from patients, process it efficiently and timely and predict risk patterns in advance and enable seamless collaboration between the patients and the care team is required. This allows care teams to proactively manage the care of the patient and limit complications and readmissions. Internet of things enabled collaborative cloud based e-health is evolving as one of the key transformation approaches in helping to address the 30 day readmission avoidance efforts. While the sensors provide critical data, there are significant constraints in terms of processing, power, storage and overall context. The power and capabilities of the cloud can augment the local visibility of sensors by providing capabilities that the sensors lack. In this work, we define these capabilities as the five P's: Provisioning, Policy Management, Processing, Protection and Prediction. We argue that by bringing these elements together, the e-health architecture is able to take the data from sensors securely and transfer it to the cloud and generate insights and actions that help improve healthcare outcomes in a timely manner. The Cloud management plays a critical role in ensuring the integrity and availability of vital information. 
The blind spots in the unavailability of data or compromised data can result in missed opportunities for proactive care; ours proposed architecture ensures data availability, processing availability and integrity and thus is very important in a 30 day readmission context.","PeriodicalId":432345,"journal":{"name":"10th IEEE International Conference on Collaborative Computing: Networking, Applications and Worksharing","volume":"162 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124553821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-04-07DOI: 10.4108/ICST.COLLABORATECOM.2014.257584
Zhe Wang, N. Minsky
Conventional online social networks (OSNs) are implemented in a centralized manner. Although centralization is a convenient way to implement OSNs, it has several well-known drawbacks. Chief among them are the risks posed to the security and privacy of the information maintained by the OSN, and the loss of control over the information contributed by individual members. These concerns prompted several attempts to create decentralized OSNs, or DOSNs. The basic idea underlying these attempts is that each member of a social network keeps its data under its own control, instead of surrendering it to a central host, and provides access to it to other members of the OSN according to its own access-control policy. Unfortunately, all existing DOSN projects have a very serious limitation: they are unable to subject the membership of a DOSN, and the interaction between its members, to any global policy. We adopt the decentralization idea underlying DOSNs, complementing it with a means for specifying and enforcing a wide range of policies over the membership of a social community and over the interaction between its disparate distributed members, and we do so in a scalable fashion.
{"title":"Establishing global policies over decentralized online social networks","authors":"Zhe Wang, N. Minsky","doi":"10.4108/ICST.COLLABORATECOM.2014.257584","DOIUrl":"https://doi.org/10.4108/ICST.COLLABORATECOM.2014.257584","url":null,"abstract":"Conventional online social networks (OSNs) are implemented in a centralized manner. Although centralization is a convenient way for implementing OSNs, it has several well known drawbacks. Chief among them are the risks they pose to the security and privacy of the information maintained by the OSN; and the loss of control over the information contributed by individual members. These concerns prompted several attempts to create decentralized OSNs, or DOSNs. The basic idea underlying these attempts, is that each member of a social network keeps its data under its own control, instead of surrendering it to a central host; providing access to it to other members of the OSN according to its own access-control policy. Unfortunately all existing DOSN projects have a very serious limitation. Namely, they are unable to subject the membership of a DOSN, and the interaction between its members, to any global policy. We adopt the decentralization idea underlying DOSNs, complementing it with a means for specifying and enforcing a wide range of policies over the membership of a social community, and over the interaction between its disparate distributed members. 
And we do so in a scalable fashion.","PeriodicalId":432345,"journal":{"name":"10th IEEE International Conference on Collaborative Computing: Networking, Applications and Worksharing","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129875910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}