
Proceedings of the 12th ACM International Conference on Distributed and Event-based Systems: Latest Publications

Venilia, On-line Learning and Prediction of Vessel Destination
Moti Bachar, Gal Elimelech, Itai Gat, Gil Sobol, Nicolo Rivetti, A. Gal
The ACM DEBS 2018 Grand Challenge focuses on (soft) real-time prediction of both the destination port and the time of arrival of vessels, monitored through the Automated Identification System (AIS). Venilia's prediction mechanism is based on a variety of machine learning techniques, including Markov predictive models. To improve the accuracy of a model trained off-line on historical data, Venilia also supports on-line continuous training using an incoming event stream. The software architecture enables a low-latency, highly parallelized, and load-balanced prediction pipeline. Aiming at a portable and reusable solution, Venilia is implemented on top of the Akka Actor framework. Finally, Venilia is also equipped with a visualization tool for data exploration.
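The abstract names Markov predictive models but does not spell out the formulation; the sketch below is one plausible, minimal reading: a first-order Markov chain over discretized AIS positions whose terminal states are destination ports. The class and method names, the greedy decoding, and the grid cells are illustrative assumptions, not Venilia's actual implementation.

```python
from collections import defaultdict

class MarkovDestinationPredictor:
    """Hypothetical first-order Markov model over discretized AIS cells.

    Illustrative sketch only: it counts transitions between grid cells seen in
    historical voyages and, for a partial trajectory, follows the most likely
    transitions until a port (terminal state) is reached.
    """

    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))  # cell -> next cell -> count
        self.ports = set()  # cells known to be destination ports

    def train(self, voyages):
        """voyages: iterable of (cell_sequence, destination_port) pairs."""
        for cells, port in voyages:
            self.ports.add(port)
            path = list(cells) + [port]
            for a, b in zip(path, path[1:]):
                self.transitions[a][b] += 1

    def update(self, cells, port):
        """On-line continuous training on a single completed voyage."""
        self.train([(cells, port)])

    def predict_destination(self, current_cell, max_steps=500):
        """Greedily follow the most frequent transition until a port is hit."""
        cell = current_cell
        for _ in range(max_steps):
            if cell in self.ports:
                return cell
            nxt = self.transitions.get(cell)
            if not nxt:
                return None  # unseen state: no prediction possible
            cell = max(nxt, key=nxt.get)
        return None


if __name__ == "__main__":
    model = MarkovDestinationPredictor()
    model.train([(["c1", "c2", "c3"], "VALLETTA"),
                 (["c1", "c2", "c4"], "VALLETTA"),
                 (["c5", "c2", "c3"], "GENOA")])
    print(model.predict_destination("c2"))  # VALLETTA: most voyages through c2 end there
```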
{"title":"Venilia, On-line Learning and Prediction of Vessel Destination","authors":"Moti Bachar, Gal Elimelech, Itai Gat, Gil Sobol, Nicolo Rivetti, A. Gal","doi":"10.1145/3210284.3220505","DOIUrl":"https://doi.org/10.1145/3210284.3220505","url":null,"abstract":"The ACM DEBS 2018 Grand Challenge focuses on (soft) real-time prediction of both the destination port and the time of arrival of vessels, monitored through the Automated Identification System (AIS). Venilia prediction mechanism is based on a variety of machine learning techniques, including Markov predictive models. To improve the accuracy of a model, trained off-line on historical data, Venilia supports also on-line continuous training using an incoming event stream. The software architecture enables a low latency, highly parallelized, and load balanced prediction pipeline. Aiming at a portable and reusable solution, Venilia is implemented on top of the Akka Actor framework. Finally, Venilia is also equipped with a visualization tool for data exploration.","PeriodicalId":412438,"journal":{"name":"Proceedings of the 12th ACM International Conference on Distributed and Event-based Systems","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114309443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 2
Buffer Provisioning for Large-Scale Data-Acquisition Systems
Alejandro Santos, W. Vandelli, P. García, H. Fröning
The data acquisition system of the ATLAS experiment, a major experiment of the Large Hadron Collider (LHC) at CERN, will go through a major upgrade in the next decade. The upgrade is driven by experimental physics requirements, calling for increased data rates on the order of 6 TB/s. By contrast, the data rate of the existing system is 160 GB/s. Among the changes in the upgraded system will be a very large buffer with a projected size on the order of 70 PB. The buffer's role will be to decouple data production from on-line data processing, storing data for periods of up to 24 hours until it can be analyzed by the event processing system. The larger buffer will allow a new data recording strategy, providing additional margins to handle variable data rates. At the same time it will provide sensible trade-offs between buffering space and on-line processing capabilities. This compromise between the two resources will be possible because the data production cycle includes time periods in which the experiment will not produce data. In this paper we analyze the consequences of such trade-offs, and introduce a tool that allows a detailed exploration of different strategies for resource provisioning. It is based on a model of the upgraded data acquisition system, implemented in a simulation framework. From this model it is possible to obtain insight into the dynamics of the running system. Given predefined resource constraints, we provide bounds for the provisioning of buffering space and on-line processing requirements.
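The paper's model is implemented in a full simulation framework; the toy sketch below only illustrates the underlying trade-off, namely how buffer occupancy grows while a 6 TB/s production phase outpaces on-line processing and drains again during beam-off periods. The phase lengths, the 3.8 TB/s processing rate, and the function name are assumptions chosen to land near the quoted orders of magnitude, not parameters from the paper.

```python
def simulate_buffer(production_tb_s, processing_tb_s, run_s, idle_s, cycles):
    """Toy discrete-time model of a data-acquisition buffer.

    The experiment alternates between data-taking runs (production active) and
    idle periods (production off); on-line processing drains the buffer at a
    constant rate in both phases. Returns the peak occupancy in TB, a lower
    bound on the buffer size to provision.
    """
    occupancy = 0.0
    peak = 0.0
    for _ in range(cycles):
        for second in range(run_s + idle_s):
            inflow = production_tb_s if second < run_s else 0.0
            occupancy = max(0.0, occupancy + inflow - processing_tb_s)
            peak = max(peak, occupancy)
    return peak


if __name__ == "__main__":
    # Illustrative numbers only: 6 TB/s production for 8 h, 16 h without beam,
    # processing capacity sized to clear the backlog within the 24 h cycle.
    peak_tb = simulate_buffer(production_tb_s=6.0, processing_tb_s=3.8,
                              run_s=8 * 3600, idle_s=16 * 3600, cycles=2)
    print(f"peak buffer occupancy: {peak_tb:.0f} TB (~{peak_tb / 1000:.0f} PB)")
```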
{"title":"Buffer Provisioning for Large-Scale Data-Acquisition Systems","authors":"Alejandro Santos, W. Vandelli, P. García, H. Fröning","doi":"10.1145/3210284.3210288","DOIUrl":"https://doi.org/10.1145/3210284.3210288","url":null,"abstract":"The data acquisition system of the ATLAS experiment, a major experiment of the Large Hadron Collider (LHC) at CERN, will go through a major upgrade in the next decade. The upgrade is driven by experimental physics requirements, calling for increased data rates on the order of 6 TB/s. By contrast, the data rate of the existing system is 160 GB/s. Among the changes in the upgraded system will be a very large buffer with a projected size on the order of 70 PB. The buffer role will be decoupling of data production from on-line data processing, storing data for periods of up to 24 hours until it can be analyzed by the event processing system. The larger buffer will allow a new data recording strategy, providing additional margins to handle variable data rates. At the same time it will provide sensible trade-offs between buffering space and on-line processing capabilities. This compromise between two resources will be possible since the data production cycle includes time periods where the experiment will not produce data. In this paper we analyze the consequences of such trade-offs, and introduce a tool that allows a detailed exploration of different strategies for resource provisioning. It is based on a model of the upgraded data acquisition system, implemented in a simulation framework. From this model it is possible to obtain insight into the dynamics of the running system. Given predefined resource constraints, we provide bounds for the provisioning of buffering space and on-line processing requirements.","PeriodicalId":412438,"journal":{"name":"Proceedings of the 12th ACM International Conference on Distributed and Event-based Systems","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130783001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
A Platform for Choreography of Heterogeneous Healthcare Services
Wonjae Kim, Young Yoon
In this paper, we design a novel platform that facilitates integrated healthcare services without centralized orchestration. Events that reflect dynamically changing conditions of patients are published using a scalable messaging middleware built on top of a publish/subscribe broker overlay network. Events matching service rules are routed to the appropriate caretakers. Service rules are issued autonomously by the caretakers who subscribe to the future matching events. Through this event-driven system, we aim to help the caretakers and medical staff to recommend and offer services to patients in a more timely and seamless manner.
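As a rough illustration of the choreography idea (not the authors' middleware), the sketch below shows a minimal content-based publish/subscribe broker in which caretakers register service rules as subscriptions and patient-condition events are routed to every caretaker whose rule matches. All class names, fields, and the example rules are invented for exposition.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class ServiceRule:
    """A caretaker's autonomously issued rule: a predicate over patient events."""
    caretaker: str
    predicate: Callable[[Dict], bool]


@dataclass
class Broker:
    """Minimal content-based pub/sub broker (single node, in-memory)."""
    rules: List[ServiceRule] = field(default_factory=list)

    def subscribe(self, rule: ServiceRule) -> None:
        self.rules.append(rule)

    def publish(self, event: Dict) -> None:
        # Route the event to every caretaker whose rule matches its content.
        for rule in self.rules:
            if rule.predicate(event):
                print(f"notify {rule.caretaker}: {event}")


if __name__ == "__main__":
    broker = Broker()
    broker.subscribe(ServiceRule("cardiologist",
                                 lambda e: e.get("type") == "heart_rate" and e["value"] > 120))
    broker.subscribe(ServiceRule("night_nurse",
                                 lambda e: e.get("ward") == "ICU"))
    broker.publish({"type": "heart_rate", "value": 135, "ward": "ICU", "patient": "p-17"})
```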
{"title":"A Platform for Choreography of Heterogeneous Healthcare Services","authors":"Wonjae Kim, Young Yoon","doi":"10.1145/3210284.3219771","DOIUrl":"https://doi.org/10.1145/3210284.3219771","url":null,"abstract":"In this paper, we design a novel platform that facilitates integrated healthcare services without a centralized orchestration. Events that reflect dynamically changing conditions of patients are published using a scalable messaging middleware built on top of a publish/subscribe broker overlay network. Events matching service rules are routed to the appropriate caretakers. Services rules are issued autonomously by the caretakers who subscribe to the future matching events. Through this event-driven system, we aim to help the caretakers and medical staff to recommend and offer services to patients in a more timely and seamless manner.","PeriodicalId":412438,"journal":{"name":"Proceedings of the 12th ACM International Conference on Distributed and Event-based Systems","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122448470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Secret Sharing in Pub/Sub Using Trusted Execution Environments
Javier Munster, H. Jacobsen
An essential security concern in the publish/subscribe paradigm is that of guaranteeing the confidentiality of the data being transmitted. Existing solutions require that some initial parameters, keys or secrets be exchanged or otherwise established between communicating entities before secure end-to-end communication can occur. Most existing solutions in the literature either weaken the desirable decoupling properties of pub/sub or rely on a completely trusted out-of-band service to disseminate these values. This problem can be avoided through the use of Shamir's secret sharing scheme, at the cost of a prohibitively large number of messages, scaling exponentially with the path length between publisher and subscriber. Intel's Software Guard Extensions (SGX) offers trusted execution environments to shield application data from untrusted software running at a higher privilege level. Unfortunately, SGX requires the use of Intel's proprietary hardware and architecture. We mitigate these problems through HyShare, a hybrid broker network used for the purposes of sharing a secret between communicating publishers and subscribers. The broker network is composed of regular brokers that use Shamir's secret sharing scheme and brokers with SGX to reduce the overall number of messages needed to share a secret. By fine tuning the combination of these brokers, it is possible to strike a balance between network resource use and hardware heterogeneity.
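The regular brokers in HyShare rely on Shamir's secret sharing, in which a secret is split into n shares such that any k of them reconstruct it by polynomial interpolation over a prime field. The self-contained sketch below shows the scheme itself; the prime, function names, and parameters are illustrative, and a real deployment would use a vetted library and handle secrets larger than a single field element.

```python
import random

PRIME = 2 ** 127 - 1  # a Mersenne prime, large enough for small demo secrets


def split_secret(secret, k, n, prime=PRIME):
    """Split `secret` into n shares, any k of which reconstruct it."""
    # Random polynomial of degree k-1 with the secret as the constant term.
    coeffs = [secret] + [random.randrange(1, prime) for _ in range(k - 1)]

    def f(x):
        return sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime

    return [(x, f(x)) for x in range(1, n + 1)]


def reconstruct(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % prime
                den = (den * (xi - xj)) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret


if __name__ == "__main__":
    shares = split_secret(123456789, k=3, n=5)
    print(reconstruct(shares[:3]))   # any 3 of the 5 shares suffice -> 123456789
    print(reconstruct(shares[1:4]))  # a different subset reconstructs it as well
```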
{"title":"Secret Sharing in Pub/Sub Using Trusted Execution Environments","authors":"Javier Munster, H. Jacobsen","doi":"10.1145/3210284.3210290","DOIUrl":"https://doi.org/10.1145/3210284.3210290","url":null,"abstract":"An essential security concern in the publish/subscribe paradigm is that of guaranteeing the confidentiality of the data being transmitted. Existing solutions require that some initial parameters, keys or secrets be exchanged or otherwise established between communicating entities before secure end-to-end communication can occur. Most existing solutions in the literature either weaken the desirable decoupling properties of pub/sub or rely on a completely trusted out-of-band service to disseminate these values. This problem can be avoided through the use of Shamir's secret sharing scheme, at the cost of a prohibitively large number of messages, scaling exponentially with the path length between publisher and subscriber. Intel's Software Guard Extensions (SGX) offers trusted execution environments to shield application data from untrusted software running at a higher privilege level. Unfortunately, SGX requires the use of Intel's proprietary hardware and architecture. We mitigate these problems through HyShare, a hybrid broker network used for the purposes of sharing a secret between communicating publishers and subscribers. The broker network is composed of regular brokers that use Shamir's secret sharing scheme and brokers with SGX to reduce the overall number of messages needed to share a secret. By fine tuning the combination of these brokers, it is possible to strike a balance between network resource use and hardware heterogeneity.","PeriodicalId":412438,"journal":{"name":"Proceedings of the 12th ACM International Conference on Distributed and Event-based Systems","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133962902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 2
Log Pruning in Distributed Event-sourced Systems
Benjamin Erb, Dominik Meißner, Ferdinand Ogger, F. Kargl
Event sourcing is increasingly used and implemented in event-based systems for maintaining the evolution of application state. However, unbounded event logs are impracticable for many systems, as it is difficult to align scalability requirements and long-term runtime behavior with the corresponding storage requirements. To this end, we explore the design space of log pruning approaches suitable for event-sourced systems. Furthermore, we survey specific log pruning mechanisms for event-sourced logs. In a brief evaluation, we point out the trade-offs when applying pruning to event logs and highlight the applicability of log pruning to event-sourced systems.
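One widely used pruning mechanism for event-sourced logs is snapshot-based: persist a snapshot of the reconstructed state and discard the events it covers, trading full replayability for bounded storage. The paper surveys several such mechanisms; the sketch below, with invented class and method names, illustrates only this snapshot variant.

```python
from dataclasses import dataclass
from typing import Any, Dict, List


@dataclass
class Event:
    seq: int
    payload: Dict[str, Any]


class EventSourcedLog:
    """Toy event-sourced entity with snapshot-based log pruning."""

    def __init__(self):
        self.events: List[Event] = []
        self.snapshot_state: Dict[str, Any] = {}
        self.snapshot_seq = 0  # highest sequence number folded into the snapshot

    def append(self, event: Event) -> None:
        self.events.append(event)

    def current_state(self) -> Dict[str, Any]:
        # Replay: start from the snapshot and apply the remaining events.
        state = dict(self.snapshot_state)
        for ev in self.events:
            state.update(ev.payload)
        return state

    def prune(self) -> int:
        """Fold all buffered events into a snapshot and drop them from the log."""
        self.snapshot_state = self.current_state()
        if self.events:
            self.snapshot_seq = self.events[-1].seq
        dropped = len(self.events)
        self.events.clear()
        return dropped


if __name__ == "__main__":
    log = EventSourcedLog()
    for i in range(1, 4):
        log.append(Event(i, {"counter": i}))
    print(log.prune(), "events pruned")   # 3 events pruned
    log.append(Event(4, {"counter": 4}))
    print(log.current_state())            # {'counter': 4}
```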
{"title":"Log Pruning in Distributed Event-sourced Systems","authors":"Benjamin Erb, Dominik Meißner, Ferdinand Ogger, F. Kargl","doi":"10.1145/3210284.3219767","DOIUrl":"https://doi.org/10.1145/3210284.3219767","url":null,"abstract":"Event sourcing is increasingly used and implemented in event-based systems for maintaining the evolution of application state. However, unbounded event logs are impracticable for many systems, as it is difficult to align scalability requirements and long-term runtime behavior with the corresponding storage requirements. To this end, we explore the design space of log pruning approaches suitable for event-sourced systems. Furthermore, we survey specific log pruning mechanisms for event-sourced logs. In a brief evaluation, we point out the trade-offs when applying pruning to event logs and highlight the applicability of log pruning to event-sourced systems.","PeriodicalId":412438,"journal":{"name":"Proceedings of the 12th ACM International Conference on Distributed and Event-based Systems","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124772740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 4
FogStore
H. Gupta, U. Ramachandran
We design Fogstore, a key-value store for event-based systems, that exploits the concept of relevance to guarantee low-latency access to relevant data with strong consistency guarantees, while providing tolerance from geographically correlated failures. Distributed event-based processing pipelines are envisioned to utilize the resources of densely geo-distributed infrastructures for low-latency responses - enabling real-time applications. Increasing complexity of such applications results in higher dependence on state, which has driven the incorporation of state-management as a core functionality of contemporary stream processing engines a la Apache Flink and Samza. Processing components executing under the same context (like location) often produce information that may be relevant to others, thereby necessitating shared state and an out-of-band globally-accessible data-store. Efficient access to application state is critical for overall performance, thus centralized data-stores are not a viable option due to the high-latency of network traversals. On the other hand, a highly geo-distributed datastore with low-latency implemented with current key-value stores would necessitate degrading client expectation of consistency as per the PACELC theorem. In this paper we exploit the notion of contextual relevance of events (data) in situation-awareness applications - and offer differential consistency guarantees for clients based on their context. We highlight important systems concerns that may arise with a highly geo-distributed system and show how Fogstore's design tackles them. We present, in detail, a prototype implementation of Fogstore's mechanisms on Apache Cassandra and a performance evaluation. Our evaluations show that Fogstore is able to achieve the throughput of eventually consistent configurations while serving data with strong consistency to the contextually relevant clients.
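The differential-consistency idea can be pictured as a read-path policy that gives contextually relevant clients strong (quorum) reads while distant clients read from a single replica. The sketch below is an assumption-laden illustration of that policy only: the haversine check, the 5 km relevance radius, and the consistency-level strings are invented and are not FogStore's actual Cassandra integration.

```python
import math


def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) pairs in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))


def pick_consistency(client_location, data_context_location, relevance_radius_km=5.0):
    """Context-aware consistency policy (illustrative, not FogStore's API).

    Clients inside the data item's relevance radius (e.g. vehicles near the
    intersection they query about) read with a strong quorum; far-away clients,
    for whom slight staleness is acceptable, read from a single replica.
    """
    distance = haversine_km(client_location, data_context_location)
    return "QUORUM" if distance <= relevance_radius_km else "ONE"


if __name__ == "__main__":
    intersection = (48.7758, 9.1829)                           # data item's context
    print(pick_consistency((48.7760, 9.1830), intersection))   # nearby client -> QUORUM
    print(pick_consistency((52.5200, 13.4050), intersection))  # distant client -> ONE
```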
{"title":"FogStore","authors":"H. Gupta, U. Ramachandran","doi":"10.1145/3210284.3210297","DOIUrl":"https://doi.org/10.1145/3210284.3210297","url":null,"abstract":"We design Fogstore, a key-value store for event-based systems, that exploits the concept of relevance to guarantee low-latency access to relevant data with strong consistency guarantees, while providing tolerance from geographically correlated failures. Distributed event-based processing pipelines are envisioned to utilize the resources of densely geo-distributed infrastructures for low-latency responses - enabling real-time applications. Increasing complexity of such applications results in higher dependence on state, which has driven the incorporation of state-management as a core functionality of contemporary stream processing engines a la Apache Flink and Samza. Processing components executing under the same context (like location) often produce information that may be relevant to others, thereby necessitating shared state and an out-of-band globally-accessible data-store. Efficient access to application state is critical for overall performance, thus centralized data-stores are not a viable option due to the high-latency of network traversals. On the other hand, a highly geo-distributed datastore with low-latency implemented with current key-value stores would necessitate degrading client expectation of consistency as per the PACELC theorem. In this paper we exploit the notion of contextual relevance of events (data) in situation-awareness applications - and offer differential consistency guarantees for clients based on their context. We highlight important systems concerns that may arise with a highly geo-distributed system and show how Fogstore's design tackles them. We present, in detail, a prototype implementation of Fogstore's mechanisms on Apache Cassandra and a performance evaluation. Our evaluations show that Fogstore is able to achieve the throughput of eventually consistent configurations while serving data with strong consistency to the contextually relevant clients.","PeriodicalId":412438,"journal":{"name":"Proceedings of the 12th ACM International Conference on Distributed and Event-based Systems","volume":"245 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121704186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 43
New Challenges and Opportunities in Stream Processing: Transactions, Predictive Analytics, and Beyond: (Invited Keynote)
Nesime Tatbul
Extended Abstract: Stream processing has been an area of ongoing research since the early 2000s. Fueled by industry’s growing interest in dealing with high-velocity big data in near real-time settings, there has been a resurgence of recent activity in both research and engineering of large-scale stream processing systems. In this talk, we will examine the state of the art, focusing in particular on key trends of the past five years with an outlook towards the next five years. I will also give examples from our own work, including stream processing in transactional settings as well as predictive time series analytics for the Internet of Things. Transactional stream processing broadly refers to processing streaming data with correctness guarantees. These guarantees include not only properties that are intrinsic to stream processing (e.g., order, exactly-once semantics), but also ACID properties of traditional OLTP-oriented databases, which arise in streaming applications with shared mutable state. In our recent work, we have designed and built the S-Store System, a scalable main-memory system that supports hybrid OLTP+streaming workloads with strict correctness needs [5]. A use case that best exemplifies the strengths of S-Store is real-time data ingestion [4]. Thus, I will also discuss the requirements of modern data ingestion and how to meet them using S-Store, especially within the context of our BigDAWG Polystore System [1, 6].
{"title":"New Challenges and Opportunities in Stream Processing: Transactions, Predictive Analytics, and Beyond: (Invited Keynote)","authors":"Nesime Tatbul","doi":"10.1145/3210284.3214706","DOIUrl":"https://doi.org/10.1145/3210284.3214706","url":null,"abstract":"EXTENDED ABSTRACT Stream processing has been an area of ongoing research since the early 2000s. Fueled by industry’s growing interest in dealing with high-velocity big data in near real-time settings, there has been a resurgence of recent activity in both research and engineering of large-scale stream processing systems. In this talk, we will examine the state of the art, focusing in particular on key trends of the past five years with an outlook towards the next five years. I will also give examples from our own work, including stream processing in transactional settings as well as predictive time series analytics for the Internet of Things. Transactional stream processing broadly refers to processing streaming data with correctness guarantees. These guarantees include not only properties that are intrinsic to stream processing (e.g., order, exactly-once semantics), but also ACID properties of traditional OLTP-oriented databases, which arise in streaming applications with shared mutable state. In our recent work, we have designed and built the S-Store System, a scalable main-memory system that supports hybrid OLTP+streaming workloads with strict correctness needs [5]. A use case that best exemplifies the strengths of S-Store is real-time data ingestion [4]. Thus, I will also discuss the requirements of modern data ingestion and how to meet them using S-Store, especially within the context of our BigDAWG Polystore System [1, 6].","PeriodicalId":412438,"journal":{"name":"Proceedings of the 12th ACM International Conference on Distributed and Event-based Systems","volume":"473 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133434680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Moscato 麝香葡萄
Yongjun Choi, Young Yoon
This paper presents Moscato, a web-based tool for more effective management of large-scale data and event processing platforms. With Moscato, composing data and event processing services can be done intuitively. The process of deploying new service instances, including installation and configuration, can be automated. With such automation features, we expect administrators' tedious and error-prone management tasks to be reduced. Instead, administrators can leverage Moscato's various novel visual cues to conduct multilateral situation analysis.
{"title":"Moscato","authors":"Yongjun Choi, Young Yoon","doi":"10.1145/3210284.3219772","DOIUrl":"https://doi.org/10.1145/3210284.3219772","url":null,"abstract":"This paper presents Moscato, a web-based tool for a more effective management of large-scale data and event processing platforms. With Moscato, composing data and event processing services can be done intuitively. The process of deploying new service instances including the task of installation and configuration can be automated. With such automation feature, we expect administrators tedious and error-prone management tasks are reduced. Instead, administrators can leverage Moscato's various novel visual cues in order to conduct multilateral situation analysis.","PeriodicalId":412438,"journal":{"name":"Proceedings of the 12th ACM International Conference on Distributed and Event-based Systems","volume":"160 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115265059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 1
Multimodal Complex Event Processing on Mobile Devices
Pablo Graubner, Christoph Thelen, Michael Körber, Artur Sterz, G. Salvaneschi, M. Mezini, B. Seeger, Bernd Freisleben
Mobile devices are increasingly being used in edge and fog computing environments to process contextual data collected by sensors. Although complex event processing (CEP) is a suitable approach for realizing context-aware services on mobile devices in these environments, existing mobile CEP engines do not leverage the full potential of modern mobile hardware/software architectures. In this paper, we present multimodal CEP, a novel approach to process streams of events on-device in user space (user mode), in the operating system (kernel mode), on the Wi-Fi chip (Wi-Fi mode), and/or on a sensor hub (hub mode), providing significant improvements in terms of power consumption and throughput. Multimodal CEP automatically breaks up CEP queries and selects the most adequate execution mode for the involved CEP operators. Filter, aggregation, and correlation operators can be expressed in a high-level language without requiring system-level domain-specific knowledge. Multimodal CEP enables developers to efficiently detect user activities, collect environmental conditions, or interpret operating system and network events. Furthermore, it facilitates novel context-aware services, demonstrated by a use case for gathering and analyzing mobility data by Wi-Fi probe request tracking.
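A toy rendering of the query-splitting step follows: stateless filters near the head of an operator chain are pushed to the lowest-power execution mode, while stateful aggregation and correlation remain in user space. The mode ordering, the placement heuristic, and all operator names are illustrative assumptions and deliberately much simpler than the paper's planner.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Execution modes named after the paper's terminology (user, kernel, Wi-Fi chip,
# sensor hub), ordered here from most to least power-hungry. The ordering and
# the heuristic below are illustrative assumptions.
MODES = ["user", "kernel", "wifi", "hub"]


@dataclass
class Operator:
    name: str
    kind: str        # "filter", "aggregate", or "correlate"
    stateful: bool


def place_operators(query: List[Operator]) -> List[Tuple[str, str]]:
    """Naive query splitting: push the stateless prefix of the operator chain
    as far down the stack as possible, and keep everything from the first
    stateful operator onwards in user mode."""
    placement = []
    current_mode = len(MODES) - 1  # start at the lowest-power mode (sensor hub)
    for op in query:
        if op.stateful:
            current_mode = 0       # stateful aggregation/correlation stays in user space
        placement.append((op.name, MODES[current_mode]))
    return placement


if __name__ == "__main__":
    query = [Operator("accel_threshold_filter", "filter", stateful=False),
             Operator("window_average", "aggregate", stateful=True),
             Operator("join_with_wifi_probes", "correlate", stateful=True)]
    for name, mode in place_operators(query):
        print(f"{name:>24} -> {mode} mode")
```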
{"title":"Multimodal Complex Event Processing on Mobile Devices","authors":"Pablo Graubner, Christoph Thelen, Michael Körber, Artur Sterz, G. Salvaneschi, M. Mezini, B. Seeger, Bernd Freisleben","doi":"10.1145/3210284.3210289","DOIUrl":"https://doi.org/10.1145/3210284.3210289","url":null,"abstract":"Mobile devices are increasingly being used in edge and fog computing environments to process contextual data collected by sensors. Although complex event processing (CEP) is a suitable approach for realizing context-aware services on mobile devices in these environments, existing mobile CEP engines do not leverage the full potential of modern mobile hardware/software architectures. In this paper, we present multimodal CEP, a novel approach to process streams of events on-device in user space (user mode), in the operating system (kernel mode), on the Wi-Fi chip (Wi-Fi mode), and/or on a sensor hub (hub mode), providing significant improvements in terms of power consumption and throughput. Multimodal CEP automatically breaks up CEP queries and selects the most adequate execution mode for the involved CEP operators. Filter, aggregation, and correlation operators can be expressed in a high-level language without requiring system-level domain-specific knowledge. Multimodal CEP enables developers to efficiently detect user activities, collect environmental conditions, or interpret operating system and network events. Furthermore, it facilitates novel context-aware services, demonstrated by a use case for gathering and analyzing mobility data by Wi-Fi probe request tracking.","PeriodicalId":412438,"journal":{"name":"Proceedings of the 12th ACM International Conference on Distributed and Event-based Systems","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122226928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 10
Vessel Destination and Arrival Time Prediction with Sequence-to-Sequence Models over Spatial Grid
Duc-Duy Nguyen, Chan Le Van, M. Ali
We propose a sequence-to-sequence based method to predict vessels' destination port and estimated arrival time. We consider this problem an extension of the trajectory prediction problem: it takes a sequence of historical locations as input and returns a sequence of future locations, which is used to determine the arrival port and estimated arrival time. Our solution first represents the trajectories on a spatial grid covering the Mediterranean Sea. Then, we train a sequence-to-sequence model to predict the future movement of vessels based on movement tendency and current location. We built our solution using a distributed architecture model and applied load-balancing techniques to achieve both high performance and scalability.
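The first step the authors describe, representing trajectories on a spatial grid over the Mediterranean, amounts to mapping each (lat, lon) fix to a discrete cell id and collapsing repeats, yielding the token sequence a sequence-to-sequence model consumes. The bounding box, the 0.1 degree cell size, and the function names below are assumptions for illustration; the paper's actual grid resolution is not given here.

```python
# Illustrative bounding box for the Mediterranean and cell size; these values
# are assumptions, not the parameters used in the paper.
LAT_MIN, LAT_MAX = 30.0, 46.0
LON_MIN, LON_MAX = -6.0, 37.0
CELL_DEG = 0.1  # grid cell edge in degrees


def to_cell(lat: float, lon: float) -> int:
    """Map a position fix to a single integer cell id (row-major order)."""
    if not (LAT_MIN <= lat <= LAT_MAX and LON_MIN <= lon <= LON_MAX):
        raise ValueError("position outside the grid")
    row = int((lat - LAT_MIN) / CELL_DEG)
    col = int((lon - LON_MIN) / CELL_DEG)
    cols = int((LON_MAX - LON_MIN) / CELL_DEG) + 1
    return row * cols + col


def trajectory_to_cells(fixes):
    """Collapse consecutive fixes falling into the same cell, yielding the
    token sequence a sequence-to-sequence model would be trained on."""
    cells = []
    for lat, lon in fixes:
        cell = to_cell(lat, lon)
        if not cells or cells[-1] != cell:
            cells.append(cell)
    return cells


if __name__ == "__main__":
    fixes = [(35.90, 14.51), (35.91, 14.52), (36.05, 14.60)]  # fixes near Valletta
    print(trajectory_to_cells(fixes))
```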
{"title":"Vessel Destination and Arrival Time Prediction with Sequence-to-Sequence Models over Spatial Grid","authors":"Duc-Duy Nguyen, Chan Le Van, M. Ali","doi":"10.1145/3210284.3220507","DOIUrl":"https://doi.org/10.1145/3210284.3220507","url":null,"abstract":"We propose a sequence-to-sequence based method to predict vessels' destination port and estimated arrival time. We consider this problem as an extension of trajectory prediction problem, that takes a sequence of historical locations as input and returns a sequence of future locations, which is used to determine arrival port and estimated arrival time. Our solution first represents the trajectories on a spatial grid covering Mediterranean Sea. Then, we train a sequence-to-sequence model to predict the future movement of vessels based on movement tendency and current location. We built our solution using distributed architecture model and applied load balancing techniques to achieve both high performance and scalability.","PeriodicalId":412438,"journal":{"name":"Proceedings of the 12th ACM International Conference on Distributed and Event-based Systems","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131680402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 6