
Proceedings of the 20th International Middleware Conference: Latest Publications

FabricCRDT: A Conflict-Free Replicated Datatypes Approach to Permissioned Blockchains
Pub Date : 2019-12-09 DOI: 10.1145/3361525.3361540
Pezhman Nasirifard, R. Mayer, H. Jacobsen
With the increasing adoption of blockchain technologies, permissioned blockchains such as Hyperledger Fabric provide a robust ecosystem for developing production-grade decentralized applications. However, the additional latency between executing and committing transactions, due to Fabric's three-phase Execute-Order-Validate (EOV) transaction lifecycle, is a potential scalability bottleneck. The added latency increases the probability of concurrent updates to the same keys by different transactions, leading to transaction failures triggered by Fabric's concurrency control mechanism. These transaction failures increase application development complexity and decrease Fabric's throughput. Conflict-free Replicated Datatypes (CRDTs) provide a solution for merging and resolving conflicts in the presence of concurrent updates. In this work, we introduce FabricCRDT, an approach for integrating CRDTs into Fabric. Our evaluations show that, in general, FabricCRDT offers higher throughput of successful transactions than Fabric, while successfully committing and merging all conflicting transactions without any failures.
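The kind of conflict-free merge FabricCRDT builds on can be illustrated with the simplest CRDT, a grow-only counter. This sketch is our own illustration (class and variable names are invented), not FabricCRDT's actual implementation:

```python
# Illustrative G-Counter CRDT: each replica increments only its own slot,
# and merging takes the per-slot maximum, so two transactions that touched
# the same key concurrently can both be committed instead of rejected.

class GCounter:
    def __init__(self, counts=None):
        self.counts = dict(counts or {})

    def increment(self, replica_id, amount=1):
        self.counts[replica_id] = self.counts.get(replica_id, 0) + amount

    def merge(self, other):
        # Merge is commutative, associative, and idempotent, so concurrent
        # updates converge regardless of the order they are applied in.
        merged = dict(self.counts)
        for rid, c in other.counts.items():
            merged[rid] = max(merged.get(rid, 0), c)
        return GCounter(merged)

    def value(self):
        return sum(self.counts.values())

# Two peers update the same key concurrently; merging commits both updates.
a, b = GCounter(), GCounter()
a.increment("peer-A", 3)
b.increment("peer-B", 2)
merged = a.merge(b)
```

Because the merge is order-independent, a validator can apply conflicting writes in any order and still reach the same committed state.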
Citations: 23
OS-Augmented Oversubscription of Opportunistic Memory with a User-Assisted OOM Killer
Pub Date : 2019-12-09 DOI: 10.1145/3361525.3361534
Wei Chen, Aidi Pi, Shaoqi Wang, Xiaobo Zhou
Exploiting opportunistic memory through oversubscription is an appealing approach to improving cluster utilization and throughput. In this paper, we find that the efficacy of memory oversubscription depends on whether the oversubscribed tasks can be killed by an Out-Of-Memory (OOM) killer in a timely manner, so as to avoid significant memory thrashing under memory pressure. However, current approaches in modern cluster schedulers are unable to unleash the power of opportunistic memory because their user-space OOM killers cannot deliver a task-killing signal in time to terminate the oversubscribed tasks. Our experiments show that a user-space OOM killer fails to do so because it lacks memory-pressure information from the OS, while the kernel-space Linux OOM killer is too conservative to relieve memory pressure. In this paper, we design a user-assisted OOM killer (named UA killer) in kernel space, an OS augmentation for accurate thrashing detection and agile task killing. To identify a thrashing task, UA killer features a novel mechanism, constraint thrashing. On top of UA killer, we develop Charon, a cluster scheduler that oversubscribes opportunistic memory on demand. We implement Charon on top of Mercury, a state-of-the-art opportunistic cluster scheduler. Extensive experiments with a Google trace in a 26-node cluster show that Charon can: (1) achieve agile task killing, (2) improve best-effort job throughput by 3.5X over Mercury while prioritizing production jobs, and (3) improve the 90th-percentile completion time of production jobs by 62% over the Kubernetes opportunistic scheduler.
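To make the oversubscription trade-off concrete, here is a toy victim-selection policy in the spirit of a user-assisted OOM killer. The function and field names are our own illustration; UA killer's real policy lives in kernel space and is not shown in the abstract:

```python
# Hypothetical kill-decision sketch: under memory pressure, terminate an
# oversubscribed (opportunistic) task before the machine starts thrashing.
# Production tasks are never candidates.

def pick_victim(tasks, pressure_high):
    """tasks: list of dicts with 'name', 'rss_mb', 'opportunistic'.
    Returns the name of the task to kill, or None if no action is needed."""
    if not pressure_high:
        return None
    candidates = [t for t in tasks if t["opportunistic"]]
    if not candidates:
        return None
    # Prefer the largest memory consumer so pressure is relieved quickly.
    return max(candidates, key=lambda t: t["rss_mb"])["name"]

tasks = [
    {"name": "prod-db", "rss_mb": 4096, "opportunistic": False},
    {"name": "batch-1", "rss_mb": 512,  "opportunistic": True},
    {"name": "batch-2", "rss_mb": 2048, "opportunistic": True},
]
victim = pick_victim(tasks, pressure_high=True)
```

The point of the paper is precisely that this decision must fire *in time*: a user-space process that cannot observe kernel memory pressure delivers the kill signal too late.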
Citations: 8
ReLAQS
Pub Date : 2019-12-09 DOI: 10.1145/3361525.3361553
Logan Stafman, Andrew Or, M. Freedman
Approximate Query Processing has become increasingly popular as growing data sizes have increased query latency in distributed query processing systems. To provide such approximate results, systems return intermediate results and iteratively update these approximations as they process more data. In shared clusters, however, these systems waste resources by directing them to queries that are no longer improving the results given to users. We describe ReLAQS, a cluster scheduling system for online aggregation queries that aims to reduce latency by assigning resources to the queries with the most potential for improvement. ReLAQS utilizes the approximate results each query returns to periodically estimate how much progress each concurrent query is currently making. It then uses this information to predict how much progress each query is expected to make in the near future and redistributes resources in real time to maximize the overall quality of the answers returned across the cluster. Experiments show that ReLAQS achieves a reduction in latency of up to 47% compared to traditional fair schedulers.
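The core scheduling idea, giving more resources to the queries with the most potential improvement, can be sketched as a proportional redistribution. This is a minimal illustration under our own assumptions, not ReLAQS's actual progress estimator:

```python
# Toy resource redistribution: each query reports how fast its approximation
# error has been dropping recently; slots are allocated in proportion to that
# rate, so stalled queries stop consuming the cluster.

def redistribute(progress, total_slots):
    """progress: {query: recent drop in approximation error per unit work}.
    Returns {query: slots}, proportional to estimated future improvement."""
    total = sum(progress.values())
    if total == 0:
        # No query is improving; fall back to an even split.
        even = total_slots // len(progress)
        return {q: even for q in progress}
    return {q: round(total_slots * p / total) for q, p in progress.items()}

# q1 improves fastest, q3 has converged and gets nothing.
shares = redistribute({"q1": 0.08, "q2": 0.02, "q3": 0.0}, total_slots=10)
```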
Citations: 1
Combining it all: Cost minimal and low-latency stream processing across distributed heterogeneous infrastructures
Pub Date : 2019-12-09 DOI: 10.1145/3361525.3361551
Henriette Röger, Sukanya Bhowmik, K. Rothermel
Control mechanisms for stream processing applications (SPAs) that ensure latency bounds at minimal runtime cost mostly target a specific infrastructure, e.g., homogeneous nodes. With the growing popularity of the Internet of Things, fog, and edge computing, SPAs are increasingly distributed over heterogeneous infrastructures, triggering the need for holistic SPA control that still accounts for heterogeneity. We therefore combine individual control mechanisms via the latency-distribution problem, which seeks to distribute latency budgets to the individually managed components of a distributed SPA for lightweight yet effective end-to-end control. To this end, we introduce a hierarchical control architecture, give a formal definition of the latency-distribution problem, and provide both an ILP formulation to find an optimal solution and a heuristic approach, thereby enabling the combination of individual control mechanisms into one SPA while ensuring global cost minimality.
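A toy greedy heuristic for the latency-distribution problem might look like the following. This is our own sketch under the assumption that extra latency slack has diminishing cost savings per component; it is neither the paper's ILP formulation nor its heuristic:

```python
# Split an end-to-end latency budget across components, repeatedly granting
# one unit of slack to the component where it currently saves the most
# runtime cost. Savings per component are assumed to halve with each grant.

def distribute_budget(min_latency, marginal_saving, total_budget, step=1):
    """min_latency: {component: minimum achievable latency}.
    marginal_saving: {component: cost saved by the next unit of slack}.
    Returns {component: latency budget}, summing to total_budget."""
    budgets = dict(min_latency)
    saving = dict(marginal_saving)
    remaining = total_budget - sum(min_latency.values())
    assert remaining >= 0, "budget below the feasible minimum"
    while remaining >= step:
        best = max(saving, key=saving.get)   # cheapest place to add slack
        budgets[best] += step
        saving[best] *= 0.5                  # diminishing returns
        remaining -= step
    return budgets

b = distribute_budget({"op1": 2, "op2": 3}, {"op1": 4.0, "op2": 1.0},
                      total_budget=8)
```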
Citations: 14
EnclaveCache
Pub Date : 2019-12-09 DOI: 10.1145/3361525.3361533
Lixia Chen, Jian Li, Ruhui Ma, Haibing Guan, Hans A. Jacobsen
With in-memory key-value caches such as Redis and Memcached being a key component of many systems for improving throughput and reducing latency, cloud caches have been widely adopted by small companies to deploy their own cache systems. However, data security remains a major concern that affects the adoption of cloud caches. A tenant's data stored in a multi-tenant cloud environment faces threats both from co-located tenants and from the untrusted cloud provider. We propose EnclaveCache, a multi-tenant key-value cache that provides data confidentiality and privacy by leveraging Intel Software Guard Extensions (SGX). EnclaveCache utilizes multiple SGX enclaves to enforce data isolation among co-located tenants. With a carefully designed key distribution procedure, EnclaveCache ensures that each tenant-specific encryption key is securely guarded by an enclave that performs the cryptographic operations on that tenant's data. Experimental results show that EnclaveCache achieves performance comparable to traditional key-value caches (with secure communication), with a performance overhead of 13%, while providing security guarantees and better scalability.
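The tenant-isolation idea (every tenant gets its own key, and values are encrypted under that key before entering the shared cache) can be sketched as follows. The XOR keystream below is a deliberately toy stand-in for the authenticated encryption an SGX enclave would actually perform; do not use it for real security, and all names here are our own:

```python
import hashlib, hmac, os

# Toy per-tenant encryption: values cached for tenant A are unreadable with
# tenant B's key, modeling the isolation EnclaveCache enforces via enclaves.

def keystream(key, nonce, length):
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key, nonce, plaintext):
    return bytes(p ^ k for p, k in
                 zip(plaintext, keystream(key, nonce, len(plaintext))))

decrypt = encrypt  # XOR with the same keystream inverts itself

# Each tenant's key would, in the real system, never leave its enclave.
tenant_keys = {"tenant-a": os.urandom(32), "tenant-b": os.urandom(32)}
nonce = os.urandom(12)
cached = encrypt(tenant_keys["tenant-a"], nonce, b"secret-value")
```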
Citations: 15
SlimGuard
Pub Date : 2019-12-09 DOI: 10.1145/3361525.3361532
Beichen Liu, Pierre Olivier, B. Ravindran
Attacks on the heap are an increasingly severe threat. State-of-the-art secure dynamic memory allocators can offer protection; however, their memory footprint is high, making them suboptimal in many situations. We introduce SlimGuard, a secure allocator whose design is driven by memory efficiency. Among other features, SlimGuard uses an efficient fine-grained size-class indexing mechanism and implements a novel dynamic canary scheme. It offers low memory overhead thanks to size classes optimized for canary usage, on-demand metadata allocation, and the combination of randomized allocation and over-provisioning into a single memory-efficient security feature. SlimGuard protects against widespread heap-related attacks such as overflows, over-reads, double/invalid frees, and use-after-free. Evaluation over a wide range of applications shows that it significantly reduces memory consumption compared to a state-of-the-art secure allocator (by up to 2x in macro-benchmarks), while offering similar or better security guarantees and good performance.
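A canary scheme like SlimGuard's can be modeled in a few lines: place a secret byte right after each allocation and verify it on free, so an overflow that scribbles past the buffer is detected. This toy model is our own (SlimGuard is a real C allocator, and its dynamic scheme is more sophisticated):

```python
import os

# Toy heap with a per-heap secret canary byte appended to every allocation.

class CanaryHeap:
    def __init__(self):
        self.canary = os.urandom(1)
        self.objects = {}              # handle -> bytearray(payload + canary)

    def alloc(self, size):
        handle = len(self.objects)
        self.objects[handle] = bytearray(size) + bytearray(self.canary)
        return handle

    def write(self, handle, offset, data):
        # Deliberately unchecked, like a C memcpy: may overwrite the canary.
        buf = self.objects[handle]
        buf[offset:offset + len(data)] = data

    def free(self, handle):
        buf = self.objects.pop(handle)
        if bytes(buf[-1:]) != self.canary:
            raise MemoryError("heap overflow detected: canary corrupted")

heap = CanaryHeap()
h = heap.alloc(8)
heap.write(h, 0, b"12345678")   # in bounds: canary stays intact
heap.free(h)                    # passes the canary check
```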
Citations: 10
MooD: MObility Data Privacy as Orphan Disease: Experimentation and Deployment Paper
Pub Date : 2019-12-09 DOI: 10.1145/3361525.3361542
Besma Khalfoun, Mohamed Maouche, Sonia Ben Mokhtar, S. Bouchenak
With the increasing prevalence of handheld devices, Location-Based Services (LBSs) have become very popular in facilitating users' daily lives through a broad range of applications (e.g., traffic monitoring, geo-located search, geo-gaming). However, several studies have shown that the collected mobility data may reveal sensitive information about end-users, such as their home and workplace, or their gender and political, religious, or sexual preferences. To counter these threats, many Location Privacy Protection Mechanisms (LPPMs) have been proposed in the literature. While existing LPPMs protect most of the users in a mobility dataset, there is usually a subset of users who are not protected by any of them. By analogy with medical research, these users suffer from orphan diseases, for which the medical community is still looking for a remedy. In this paper, we present MooD, a fine-grained, multi-LPPM, user-centric solution whose main objective is to find a treatment for mobile users' orphan disease by protecting them from re-identification attacks. Our experiments are conducted on four real-world datasets. The results show that MooD outperforms its competitors, protecting between 97.5% and 100% of user mobility data across the various datasets.
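The per-user, multi-LPPM idea can be sketched as: run a re-identification attack against each candidate LPPM's output and keep, for each user, the first mechanism that defeats the attack. Everything below is a toy of our own devising (a nearest-centroid matcher standing in for a real re-identification attack, and trivial point transforms standing in for real LPPMs):

```python
# Toy re-identification: the attacker knows each user's historical centroid
# and matches a protected trace to the nearest one.

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def reidentified(user, protected_trace, known_centroids):
    cx, cy = centroid(protected_trace)
    guess = min(known_centroids,
                key=lambda u: (known_centroids[u][0] - cx) ** 2 +
                              (known_centroids[u][1] - cy) ** 2)
    return guess == user

def protect(user, trace, lppms, known_centroids):
    """lppms: ordered list of (name, point_transform) pairs. Returns the
    first LPPM name that defeats the attack, or None if the user remains
    unprotected (an 'orphan' user)."""
    for name, transform in lppms:
        if not reidentified(user, [transform(p) for p in trace],
                            known_centroids):
            return name
    return None

known = {"alice": (0.0, 0.0), "bob": (10.0, 10.0)}
lppms = [
    ("identity", lambda p: p),                      # no protection
    ("shift", lambda p: (p[0] + 9, p[1] + 9)),      # toy obfuscation
]
chosen = protect("alice", [(0, 0), (1, 1)], lppms, known)
```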
Citations: 3
PrivaTube: Privacy-Preserving Edge-Assisted Video Streaming
Pub Date : 2019-12-09 DOI: 10.1145/3361525.3361546
Simon Da Silva, Sonia Ben Mokhtar, Stefan Contiu, D. Négru, Laurent Réveillère, E. Rivière
Video on Demand (VoD) streaming is the largest source of Internet traffic. Efficient and scalable VoD requires Content Delivery Networks (CDNs), whose costs are prohibitive for many providers. An alternative is to cache and serve video content using end-users' devices. Direct connections between these devices complement the resources of core VoD servers with an edge-assisted collaborative CDN. VoD access histories can reveal critical personal information, and centralized VoD solutions are notorious for exploiting personal data. Hiding users' interests from servers and edge-assisting devices is necessary for a new generation of privacy-preserving VoD services. We introduce PrivaTube, a scalable and cost-effective VoD solution. PrivaTube aggregates video content from multiple servers and edge peers to offer a high Quality of Experience (QoE) to its users. It enables privacy preservation at all levels of the content distribution process. It leverages Trusted Execution Environments (TEEs) at servers and clients, and obfuscates access patterns using fake requests that reduce the risk of personal information leaks. Fake requests are further leveraged to implement proactive provisioning and improve QoE. Our evaluation of a complete prototype shows that PrivaTube reduces the load on servers and increases QoE while providing strong privacy guarantees.
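Access-pattern obfuscation with fake requests can be illustrated with a minimal sketch (our own assumption about the mechanism, not PrivaTube's exact policy): pad each real fetch with dummy fetches for other videos, so an observer of the request stream cannot tell which title was actually watched.

```python
import random

# Mix one real request with n_fake decoys drawn from the catalog; shuffling
# makes the real request indistinguishable by position in the stream.

def obfuscate(real_video, catalog, n_fake, rng):
    decoys = rng.sample([v for v in catalog if v != real_video], n_fake)
    requests = decoys + [real_video]
    rng.shuffle(requests)
    return requests

rng = random.Random(42)                       # seeded for reproducibility
catalog = [f"video-{i}" for i in range(100)]
reqs = obfuscate("video-7", catalog, n_fake=4, rng=rng)
```

As the abstract notes, the decoy fetches are not pure overhead: they can double as proactive provisioning, pre-populating edge caches with content other users may request.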
Citations: 9
Automating Multi-level Performance Elastic Components for IBM Streams
Pub Date : 2019-12-09 DOI: 10.1145/3361525.3361544
Xiang Ni, S. Schneider, Raju Pavuluri, Jonathan Kaus, Kun-Lung Wu
Streaming applications exhibit abundant opportunities for pipeline parallelism, data parallelism, and task parallelism. Prior work in IBM Streams introduced an elastic threading model that sought the best performance by automatically tuning the number of threads. In this paper, we introduce the ability to automatically discover where that threading model is profitable. However, this introduces a new challenge: we have separate performance elasticity mechanisms designed with different objectives, leading to potential negative interactions and unintended performance degradation. We present our experiences in overcoming these challenges by showing how to coordinate separate but interfering elasticity mechanisms to maximize performance gains with stable and fast parallelism exploitation. We first describe an elastic performance mechanism that automatically adapts different threading models to different regions of an application. We then show a coherent ecosystem for coordinating this threading-model elasticity with thread-count elasticity. The result is an online, stable, multi-level elastic coordination scheme that adapts different regions of a streaming application to different threading models and numbers of threads. We implemented this multi-level coordination scheme in IBM Streams and demonstrated that it (a) scales to over a hundred threads; (b) can improve performance by an order of magnitude on two different processor architectures when an application can benefit from multiple threading models; and (c) achieves performance comparable to hand-optimized applications but with far fewer threads.
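Thread-count elasticity of the kind the prior work automates can be sketched as a hill-climbing loop: keep adding threads while measured throughput still improves by a minimum margin, then stop. This simplified version is our own illustration, not IBM Streams' tuner:

```python
# Hill-climb the thread count against a throughput measurement callback.

def tune_threads(measure, max_threads, min_gain=0.05):
    """measure(n) -> throughput with n threads.
    Returns the smallest thread count at which gains flatten out."""
    best_n, best_tp = 1, measure(1)
    for n in range(2, max_threads + 1):
        tp = measure(n)
        if tp < best_tp * (1 + min_gain):
            break                    # gains flattened: stop scaling up
        best_n, best_tp = n, tp
    return best_n

# Synthetic throughput curve: scales well to 4 threads, then saturates.
curve = {1: 100, 2: 180, 3: 250, 4: 300, 5: 305, 6: 306}
chosen = tune_threads(lambda n: curve[n], max_threads=6)
```

The coordination problem the paper tackles appears when a second mechanism (choosing the threading model per region) shifts this throughput curve underneath the tuner while it is still climbing.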
Citations: 4
Medley: A Novel Distributed Failure Detector for IoT Networks
Pub Date : 2019-12-09 DOI: 10.1145/3361525.3361556
Rui Yang, Shichu Zhu, Yifei Li, Indranil Gupta
Efficient and correct operation of an IoT network requires the presence of a failure detector and membership protocol amongst the IoT nodes. This paper presents a new failure detector for IoT settings where nodes are connected via a wireless ad-hoc network. This failure detector, which we name Medley, is fully decentralized, allows IoT nodes to maintain a local membership list of other alive nodes, detects failures quickly (and updates the membership list), and incurs low communication overhead in the underlying ad-hoc network. In order to minimize detection time and communication, we adapt a failure detector originally proposed for datacenters (SWIM), for the IoT environment. In Medley each node picks a medley of ping targets in a randomized and skewed manner, preferring nearer nodes. Via analysis and NS-3 simulation we show the right mix of pinging probabilities that simultaneously optimize detection time and communication traffic. We have also implemented Medley for Raspberry Pis, and present deployment results.
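The skewed, randomized target selection can be illustrated with inverse-distance weighting: nearer nodes are pinged with higher probability, while distant nodes keep a non-zero chance of being probed. The `alpha` exponent and the coordinate-based distance function below are assumptions for illustration; the paper derives its own mix of pinging probabilities.

```python
import math
import random

def pick_ping_targets(members, self_pos, k=3, alpha=2.0):
    """Pick k ping targets at random, skewed toward nearer nodes.

    Illustrative of Medley's distance-skewed selection, not the paper's
    exact scheme. members maps node_id -> (x, y) position; self_pos is
    this node's (x, y). Each candidate is weighted by 1/d^alpha, so
    nearby nodes (cheap to reach in a wireless ad-hoc network) are
    probed more often, while far nodes are still occasionally checked.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    ids = list(members)
    weights = [1.0 / (dist(self_pos, members[i]) ** alpha + 1e-9) for i in ids]
    return random.choices(ids, weights=weights, k=min(k, len(ids)))
```

The trade-off the paper analyzes is exactly this skew: too strong and far-away failures are detected slowly; too weak and the radio traffic of long-range pings dominates.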
Citations: 4