Christoph Doblander, Simon Zimmermann, Kaiwen Zhang, H. Jacobsen
Shared dictionary compression is known as an efficient compression method for pub/sub. In practice, bandwidth reductions of more than 80% are achievable for JSON or XML data formats. Unlike other compression techniques such as GZip or Deflate, a dictionary is needed to compress and decompress messages. Generating a dictionary is a CPU-expensive task, and sharing it introduces bandwidth overhead. Furthermore, the dictionary must be continuously maintained to keep compression performance high. We developed MOS, a cross-platform middleware for managing shared dictionary compression in pub/sub. This includes dictionary propagation, compression/decompression, and periodic maintenance. We provide a developer API to interact with the MQTT-based pub/sub infrastructure. Our demo presents an example application built on top of MOS that demonstrates the performance of the shared dictionary compression scheme.
{"title":"Demo Abstract: MOS: A Bandwidth-Efficient Cross-Platform Middleware for Publish/Subscribe","authors":"Christoph Doblander, Simon Zimmermann, Kaiwen Zhang, H. Jacobsen","doi":"10.1145/3007592.3007607","DOIUrl":"https://doi.org/10.1145/3007592.3007607","url":null,"abstract":"Shared dictionary compression is known as an efficient compression method for pub/sub. In practice, bandwidth reductions of more than 80% are achievable for JSON or XML data formats. Compared to other compression techniques such as GZip or Deate, a dictionary is needed to compress and decompress messages. Generating a dictionary is a CPU-expensive task and sharing it introduces bandwidth overheads. Furthermore, the dictionary is continuously maintained to keep the compression performance high. We developed MOS: a cross-platform middleware for managing shared dictionary compression in pub/sub. This includes dictionary propagation, compression/decompression, and periodic maintenance. We provide a developer API to interact with the MQTT-based pub/sub infrastructure. Our demo shows an example application built on top of MOS which shows the performance of the shared dictionary compression scheme.","PeriodicalId":125362,"journal":{"name":"Proceedings of the Posters and Demos Session of the 17th International Middleware Conference","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127814291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many data-driven applications require mechanisms for processing interconnected or graph-based data sets. Several platforms exist for offline processing of such data, but fewer solutions address online computations on dynamic graphs. We combined a modified actor model, an event-sourced persistence layer, and a vertex-based, asynchronous programming model in order to unify event-driven and graph-based computations. Our distributed Chronograph platform supports both near-real-time and batch computations on dynamic, event-driven graph topologies, and enables full history tracking of the evolving graphs over time.
{"title":"Chronograph: A Distributed Platform for Event-Sourced Graph Computing","authors":"Benjamin Erb, F. Kargl","doi":"10.1145/3007592.3007601","DOIUrl":"https://doi.org/10.1145/3007592.3007601","url":null,"abstract":"Many data-driven applications require mechanisms for processing interconnected or graph-based data sets. Several platforms exist for offline processing of such data and fewer solutions address online computations on dynamic graphs. We combined a modified actor model, an event-sourced persistence layer, and a vertex-based, asynchronous programming model in order to unify event-driven and graph-based computations. Our distributed chronograph platform supports both near-realtime and batch computations on dynamic, event-driven graph topologies, and enables full history tracking of the evolving graphs over time.","PeriodicalId":125362,"journal":{"name":"Proceedings of the Posters and Demos Session of the 17th International Middleware Conference","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115067493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the advent of the Internet of Things and the availability of devices with low processing power, management, communication, and programming efforts face new requirements. In this paper, we present RConnected, a middleware that allows these devices to interact autonomously with nearby users, enabling the provisioning of services and facilitating the development of mobile applications.
{"title":"RConnected: a middleware for Mobile Services in IoT Environments","authors":"M. Carvalho, João Nuno Silva","doi":"10.1145/3007592.3007605","DOIUrl":"https://doi.org/10.1145/3007592.3007605","url":null,"abstract":"With the advent of the Internet of Things and the availability of devices with low processing power, the management, communication and programming efforts face new requirements. In this paper, we present RConnected, a middleware that allows autonomous interaction of those devices with nearby users, allowing the provisioning of services and facilitating the development of mobile applications.","PeriodicalId":125362,"journal":{"name":"Proceedings of the Posters and Demos Session of the 17th International Middleware Conference","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115674501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Masoud Hemmatpour, B. Montrucchio, M. Rebaudengo, Mohammad Sadoghi
Traditional database systems sacrifice either availability or partition tolerance in order to offer strict consistency guarantees on data. However, the significant growth of Web-scale applications and the wider array of emerging workloads demand revisiting the need for full transactional consistency. One dominant new class of workload requires efficient support for single-statement transactions consisting of either a Get or a Put operation, thus simplifying the consistency model. These simple workloads have given rise to decade-long efforts to build efficient key-value stores, which often rely on a disk-resident, log-structured storage model distributed across many machines. To further expand the scope of key-value stores, in this paper we introduce Kanzi, a distributed, in-memory key-value store built over a shared-memory architecture enabled by remote direct memory access (RDMA) technology. The simple data and transaction model of Kanzi additionally allows it to serve as a generic (embedded) caching layer to speed up any disk-resident, data-intensive workload.
{"title":"Kanzi: A Distributed, In-memory Key-Value Store","authors":"Masoud Hemmatpour, B. Montrucchio, M. Rebaudengo, Mohammad Sadoghi","doi":"10.1145/3007592.3007594","DOIUrl":"https://doi.org/10.1145/3007592.3007594","url":null,"abstract":"Traditional database systems either sacrifice availability or partitionability at the cost of offering strict consistency guarantee of data. However, the significant growth of Web-scale applications and the wider array of emerging workloads demand revisiting the need for full transactional consistency. One new dominant class of workload is the ability to efficiently support single statement transaction consisting of either Get or Put operation; thus, simplifying the consistency model. These simple workloads have given rise to decade-long efforts for building efficient key-value stores that often rely on disk-resident and log-structured storage model that is distributed across many machines. To further expand the scope of key-value stores, in this paper, we introduce Kanzi, a distributed, in-memory key-value stored over shared-memory architecture enabled by remote direct memory access (RDMA) technology. The simple data and transaction model of our proposed Kanzi additionally may serve as a generic (embedded) caching layer to speed up any disk-resident data-intensive workloads.","PeriodicalId":125362,"journal":{"name":"Proceedings of the Posters and Demos Session of the 17th International Middleware Conference","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114847376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sophie Cerf, B. Robu, N. Marchand, A. Boutet, Vincent Primault, Sonia Ben Mokhtar, S. Bouchenak
The widespread adoption of Location-Based Services (LBSs) has come with controversy about privacy. While leveraging location information leads to improved services through geo-contextualization, it raises privacy concerns, as new knowledge can be inferred from location records, such as home and work places, habits, or religious beliefs. To overcome this problem, several Location Privacy Protection Mechanisms (LPPMs) have been proposed in the literature in recent years. However, every mechanism comes with its own configuration parameters that directly impact the privacy guarantees and the resulting utility of the protected data. In this context, it can be difficult for a non-expert system designer to choose appropriate configuration parameters according to the expected privacy and utility. In this paper, we present a framework enabling the easy configuration of LPPMs. To achieve this, our framework performs an offline, in-depth automated analysis of LPPMs to provide the formal relationship between their configuration parameters and both privacy and utility metrics. The framework is modular: by using different metrics, a system designer is able to fine-tune her LPPM according to her expected privacy and utility guarantees (i.e., the guarantee itself and the level of this guarantee). To illustrate the capability of our framework, we analyse Geo-Indistinguishability (a well-known differentially private LPPM) and provide the formal relationship between its ϵ configuration parameter and two privacy and utility metrics.
{"title":"Toward an Easy Configuration of Location Privacy Protection Mechanisms","authors":"Sophie Cerf, B. Robu, N. Marchand, A. Boutet, Vincent Primault, Sonia Ben Mokhtar, S. Bouchenak","doi":"10.1145/3007592.3007599","DOIUrl":"https://doi.org/10.1145/3007592.3007599","url":null,"abstract":"The widespread adoption of Location-Based Services (LBSs) has come with controversy about privacy. While leveraging location information leads to improving services through geo-contextualization, it rises privacy concerns as new knowledge can be inferred from location records, such as home/work places, habits or religious beliefs. To overcome this problem, several Location Privacy Protection Mechanisms (LPPMs) have been proposed in the literature these last years. However, every mechanism comes with its own configuration parameters that directly impact the privacy guarantees and the resulting utility of protected data. In this context, it can be difficult for a non-expert system designer to choose appropriate configuration parameters to use according to the expected privacy and utility. In this paper, we present a framework enabling the easy configuration of LPPMs. To achieve that, our framework performs an offline, in-depth automated analysis of LPPMs to provide the formal relationship between their configuration parameters and both privacy and the utility metrics. This framework is modular: by using different metrics, a system designer is able to fine-tune her LPPM according to her expected privacy and utility guarantees (i.e., the guarantee itself and the level of this guarantee). To illustrate the capability of our framework, we analyse Geo-Indistinguishability (a well known differentially private LPPM) and we provide the formal relationship between its ϵ configuration parameter and two privacy and utility metrics.","PeriodicalId":125362,"journal":{"name":"Proceedings of the Posters and Demos Session of the 17th International Middleware Conference","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116246324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this poster abstract, we envision the evolution of the scheduler of the Tasklet system from a centralized to a distributed approach. The Tasklet system is a middleware for distributed applications that allows developers to offload computation to remote resources via self-contained units of computation -- the so-called Tasklets. The current implementation of the Tasklet scheduler is based on a broker overlay network in which one broker centrally manages a pool of resources. While this allows for central control and a consistent global view of the resources in the system, the architecture risks performance bottlenecks that can be avoided by decentralized resource management. This poster discusses three contributions. First, we present the Tasklet system and the current centralized scheduling algorithm. Second, we sketch a hybrid resource management scheme that uses cache lists to avoid redundant communication between resource consumers and resource brokers. Finally, we propose a three-level scheduling architecture.
{"title":"Decentralized Scheduling for Tasklets","authors":"Janick Edinger, Dominik Schäfer, C. Becker","doi":"10.1145/3007592.3007597","DOIUrl":"https://doi.org/10.1145/3007592.3007597","url":null,"abstract":"In this poster abstract, we envision the evolution of the scheduler of the Tasklet system from a centralized to a distributed approach. The Tasklet system is a middleware for distributed applications that allows developers to offload computation to remote resources via self-contained units of computation -- the so-called Tasklets. The current implementation of the Tasklet scheduler is based on a broker overlay network where one broker centrally manages a pool of resources. While this allows for a central control and a consistent global view on the resources in the system, this architecture involves the risk of performance bottlenecks which can be avoided by a decentralized resource management. This poster discusses three contributions. First, we present the Tasklet system and the current centralized scheduling algorithm. Second, we sketch a hybrid resource management that uses cache lists to avoid redundant communication between resource consumers and resource brokers. Finally, we propose a three-level scheduling architecture.","PeriodicalId":125362,"journal":{"name":"Proceedings of the Posters and Demos Session of the 17th International Middleware Conference","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128663508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Ruta, F. Scioscia, E. Bove, A. Cinquepalmi, E. Sciascio
This paper presents a knowledge-based approach for resource discovery, allotment, and sharing in distributed pervasive scenarios. The proposed framework enables semantic-based resource retrieval, exploiting non-standard inferences and a novel method for ontology dissemination and rebuilding. The approach can enhance any publish/subscribe message-oriented middleware. A prototype was implemented and tested to verify the correctness of the approach and obtain an early performance evaluation.
{"title":"A Semantic-based Approach for Resource Discovery and Allocation in Distributed Middleware","authors":"M. Ruta, F. Scioscia, E. Bove, A. Cinquepalmi, E. Sciascio","doi":"10.1145/3007592.3007604","DOIUrl":"https://doi.org/10.1145/3007592.3007604","url":null,"abstract":"This paper presents a knowledge-based approach for resource discovery, allotment and sharing in distributed pervasive scenarios. The proposed framework enables semantic-based resource retrieval exploiting non-standard inferences and a novel method for ontology dissemination and rebuilding. The approach can enhance any publish/subscribe message-oriented middleware. A prototype was implemented and tested to prove correctness of the approach and get early performance evaluation.","PeriodicalId":125362,"journal":{"name":"Proceedings of the Posters and Demos Session of the 17th International Middleware Conference","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128525876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This demonstration showcases SEMComm, an Android application that allows an individual patient's personal device (e.g., a smartphone) to collect health data from nearby medical IoT devices and to share pieces of medical records with the devices of nearby medical personnel (e.g., doctors and nurses) using direct device-to-device (D2D) links. SEMComm uses XD, a middleware that enables device discovery, context sharing, and data transmission using heterogeneous D2D communication technologies. Current approaches for sharing electronic medical records use onerous HIPAA-compliant cloud-based solutions that are costly for hospitals and require patients to release sensitive medical records to an external server. SEMComm allows patients to maintain fine-grained control over who has access to their electronic medical data, while simultaneously allowing the patient's record to collect data from multiple medical devices, all without the need for an external network or cloud storage. Our demonstration shows how XD enables SEMComm to collect data from a blood pressure cuff and a heart-rate monitor and then to share medical data with neighboring devices using a mixed set of D2D communication links.
{"title":"SEMComm: Sharing Electronic Medical Records using Device to Device Communication","authors":"T. Kalbarczyk, C. Julien","doi":"10.1145/3007592.3007609","DOIUrl":"https://doi.org/10.1145/3007592.3007609","url":null,"abstract":"This demonstration showcases SEMComm, an Android application that allows an individual patient's personal device (e.g., smartphone) to collect health data from nearby medical IoT devices and to share pieces of medical records with the devices of nearby medical personnel (e.g., doctors and nurses) using direct device-to-device (D2D) links. SEMComm uses XD, a middleware that enables device discovery, context sharing, and data transmission using heterogeneous D2D communication technologies. Current approaches for sharing electronic medical records use onerous HIPAA-compliant cloud-based solutions that are costly for hospitals and require patients to release sensitive medical records to an external server. SEMComm allows patients to maintain fine-grained control over who has access to their electronic medical data, while simultaneously allowing the patient's record to collect data from multiple medical devices all without the need for an external network or cloud storage. Our demonstration shows how XD enables SEMComm to collect data from a blood pressure cuff and a heart-rate monitor and then to share medical data with neighboring devices using a mixed set of D2D communication links.","PeriodicalId":125362,"journal":{"name":"Proceedings of the Posters and Demos Session of the 17th International Middleware Conference","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133476074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spatiotemporal context is crucial in modern mobile applications that utilize increasing amounts of context to better predict events and user behaviors, requiring rich records of users' or devices' spatiotemporal histories [2, 3, 12]. Increasing concerns about contextual data privacy, and specifically location privacy [9], motivate onloading [8], i.e., moving storage and processing of data onto the device, to prevent revealing potentially sensitive user information. This demo showcases the PACO (Programming Abstraction for Contextual Onloading) middleware, which is designed to support onloading large amounts of contextual data to the mobile devices that generate the data; the onloading is motivated both by a need to preserve user privacy and by a desire to avoid the constant data connection required to continuously store spatiotemporal data at some third-party central service. The PACO middleware maintains a database on-device and exposes an application-facing API that provides flexible query operations over a user's historical spatiotemporal data. Through access profiles, users can control the lossiness of the queries that are used by other applications and for possible cloud offload. The PACO system model is depicted in Figure 1. In PACO, a data point is stored as timestamped location data and represents some "observation" (captured as a linked piece of context data) of a given space at a given time. PACO models a data point as having a region of influence, which can best be visualized as a heat map whose intensity decays as spatial and temporal distance from the point of observation increases. To realize this view of spatiotemporal data, PACO leverages previous work in spatiotemporal data storage [6, 11]; specifically, PACO uses both a 3-dimensional R-Tree [7] and a k-d Tree [1] to efficiently index its data points. In this demo, the PACO data points represent a tourist's observations of predefined points of interest. PACO supports queries across ranges of space, time, or the combination of the two. The basic PACO query computes the aggregate influence of all points, called the probability of knowledge (PoK), for the spatiotemporal region in a given query.
{"title":"SpatioTemporal Traveler","authors":"N. Wendt, C. Julien","doi":"10.1145/3007592.3007608","DOIUrl":"https://doi.org/10.1145/3007592.3007608","url":null,"abstract":"Spatiotemporal context is crucial in modern mobile applications that utilize increasing amounts of context to better predict events and user behaviors, requiring rich records of users’ or devices’ spatiotemporal histories [2, 3, 12]. The increasing concerns about contextual data privacy, and specifically location privacy [9] motivate onloading [8], or moving storage and processing of data onto the device, to prevent revealing potentially sensitive user information. This demo showcases the PACO (Programming Abstraction for Contextual Onloading) middleware, which is designed to support onloading large amounts of contextual data to the mobile devices that generate data; the onloading is motivated both by a need to preserve user privacy and by a desire to reduce a constant data connection to continuously store spatiotemporal data at some third-party central service. The PACO middleware maintains a database on-device and exposes an application-facing API that provides flexible query operations that can be performed over a user’s historical spatiotemporal data. Through access profiles, users can control the lossiness of the queries that are used by other applications and for possible cloud offload. The PACO system model is depicted in Figure 1. In PACO a data point is stored as timestamped location data and represents some ”observation” (captured as a linked piece of context data) of a given space at a given time. PACO models a data point as having a region of influence which can best be visualized as a heat map with intensity decaying as spatial and temporal distance increases from the point of observation. To realize this view of spatiotemporal data, PACO leverages previous work in spatiotemporal data storage [6, 11]; specifically, PACO uses both a 3-dimensional R-Tree [7] and a k-d Tree [1] to efficiently index its data points. In this demo, the PACO data points represent a tourist’s observations of predefined points of interest. PACO supports queries across ranges of space, time, or the combination of the two. The basic PACO query computes the aggregate influence of all points, called the probability of knowledge (PoK), for the spatiotemporal region in","PeriodicalId":125362,"journal":{"name":"Proceedings of the Posters and Demos Session of the 17th International Middleware Conference","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122081176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dominik Schäfer, Janick Edinger, C. Becker, Martin Breitbach
This demo paper introduces a middleware for distributed computation applications -- the Tasklet system. The Tasklet system allows developers to execute self-contained units of computation -- the so-called Tasklets -- in a pool of heterogeneous computing devices, including desktop computers, cloud resources, mobile devices, and graphics processing units. In this demonstration of the Tasklet system, we visualize the otherwise transparent process of computation offloading, from the development of an application to the actual distributed execution of tasks. While existing systems have high setup costs, the Tasklet system emphasizes ease of use and seamless integration of various heterogeneous devices. In the demonstration, we focus on three key benefits of the Tasklet system. First, we demonstrate the usability of the system by live-developing a distributed computing application in less than ten minutes. Second, we show how heterogeneous devices can be set up and join the resource pool during the execution of Tasklets. With a monitoring tool, we visualize how the computational workload is split among these resources. Third, we introduce the concept of quality of computation to tailor the otherwise generic computing framework to the requirements of individual applications.
{"title":"Writing a Distributed Computing Application in 7 Minutes with Tasklets","authors":"Dominik Schäfer, Janick Edinger, C. Becker, Martin Breitbach","doi":"10.1145/3007592.3007606","DOIUrl":"https://doi.org/10.1145/3007592.3007606","url":null,"abstract":"This demo paper introduces a middleware for distributed computation applications -- the Tasklet system. The Tasklet system allows developers to execute self-contained units of computation -- the so-called Tasklets -- in a pool of heterogeneous computing devices, including desktop computers, cloud resources, mobile devices, and graphical processing units. In this demonstration of the Tasklet system, we visualize the otherwise transparent process of computation offloading, starting from the development of an application until the actual distributed execution of tasks. While existing systems have high setup costs the Tasklet system emphasizes the ease of use and a seamless integration of various heterogeneous devices. In the demonstration, we focus on three key benefits of the Tasklet system. First, we demonstrate the usability of the system by live developing a distributed computing application in less than ten minutes. Second, we show how heterogeneous devices can be set up and join the resource pool during the execution of Tasklets. With a monitoring tool we visualize how the computational workload is split up among these resources. Third, we introduce the concept of quality of computation to tailor the otherwise generic computing framework to the requirements of individual applications.","PeriodicalId":125362,"journal":{"name":"Proceedings of the Posters and Demos Session of the 17th International Middleware Conference","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116884980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}