
Latest publications from the 2014 International Conference on High Performance Computing & Simulation (HPCS)

Exploiting distributed and shared memory hierarchies with Hitmap
Pub Date : 2014-07-21 DOI: 10.1109/HPCSim.2014.6903696
Ana Moreton-Fernandez, Arturo González-Escribano, D. Ferraris
Current multicomputers are typically built as interconnected clusters of shared-memory multicore computers. A common programming approach for these clusters is simply to use a message-passing paradigm, launching as many processes as there are cores available. Nevertheless, to better exploit the scalability of these clusters and highly parallel multicore systems, their distributed- and shared-memory hierarchies must be used efficiently. This implies combining different programming paradigms and tools at different levels of the program design. This paper presents an approach that eases programming for mixed distributed- and shared-memory parallel computers. Coordination at the distributed-memory level is simplified using Hitmap, a library for distributed computing based on hierarchical tiling of data structures. We show how this tool can be integrated with shared-memory programming models and automatic code-generation tools to efficiently exploit the multicore environment of each multicomputer node. This approach makes it possible to exploit the most appropriate techniques for each model, easily generating multilevel parallel programs that automatically adapt their communication and synchronization structures to the target machine. Our experimental results show how this approach matches or even improves on the best performance results obtained with manually optimized codes using pure MPI or OpenMP models.
Pages: 278-286
Citations: 10
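Hitmap's central idea, hierarchical tiling of a data structure over the machine hierarchy, can be pictured as a two-level block partition: first across distributed-memory nodes, then across the cores within each node. The sketch below is plain Python, not the Hitmap C API; the function names and the simple block-partition policy are illustrative assumptions only.

```python
def tile(extent, parts):
    """Split the index range [0, extent) into `parts` contiguous tiles,
    handing out the remainder one element at a time (block partition)."""
    base, rem = divmod(extent, parts)
    tiles, start = [], 0
    for p in range(parts):
        size = base + (1 if p < rem else 0)
        tiles.append((start, start + size))
        start += size
    return tiles

def hierarchical_tiling(extent, nodes, cores_per_node):
    """Two-level tiling: one distributed-memory tile per node, then one
    shared-memory subtile per core inside each node's tile."""
    return [
        [(lo + a, lo + b) for (a, b) in tile(hi - lo, cores_per_node)]
        for (lo, hi) in tile(extent, nodes)
    ]

# 100 array elements over 3 nodes with 2 cores each
layout = hierarchical_tiling(100, 3, 2)
```

Each inner pair is the half-open index range a single core would own, so communication structure at the node level and synchronization at the core level can be derived from the same layout.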
Security and privacy of location-based services for in-vehicle device systems
Pub Date : 2014-07-21 DOI: 10.1109/HPCSim.2014.6903777
Marcello Missiroli, Fabio Pierazzi, M. Colajanni
Location-based services relying on in-vehicle devices are becoming so common that it is likely that, in the near future, devices of some sort will be installed on new vehicles by default. The pressure for rapid adoption of these devices and services is not yet counterbalanced by adequate awareness of system security and data privacy issues. For example, service providers might collect, process, and sell data about cars, drivers, and locations to a plethora of organizations interested in acquiring such personal information. We propose a comprehensive scenario describing the entire process of data gathering, management, and transmission related to in-vehicle devices, and for each phase we point out the most critical security and privacy threats. By referring to this scenario, we outline issues and challenges that the academic and industry communities should address for a sound adoption of in-vehicle devices and related services.
Pages: 841-848
Citations: 0
Ophidia: A full software stack for scientific data analytics
Pub Date : 2014-07-21 DOI: 10.1109/HPCSim.2014.6903706
S. Fiore, Alessandro D'Anca, D. Elia, Cosimo Palazzo, Dean N. Williams, Ian T Foster, G. Aloisio
The Ophidia project aims to provide a big data analytics platform that addresses scientific use cases involving large volumes of multidimensional data. In this work, the Ophidia software infrastructure is discussed in detail, presenting the entire software stack from level 0 (the Ophidia data store) to level 3 (the Ophidia web service front end). In particular, this paper presents the big data cube primitives provided by the Ophidia framework, discussing in detail the most relevant available data cube manipulation operators. These primitives provide the foundations for building more complex data cube operators, such as the apex operator presented in this paper. A massive data reduction experiment on a 1 TB climate dataset is also presented to demonstrate the apex workflow in the context of the proposed framework.
Pages: 343-350
Citations: 12
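The kind of data-cube reduction the abstract describes can be pictured, very loosely, as collapsing a multidimensional array along one dimension. The sketch below is plain Python, not the Ophidia operator API; the cube layout (time × lat × lon as nested lists) and the function name are illustrative assumptions.

```python
def reduce_cube(cube, op=max):
    """Collapse the first (outermost) dimension of a nested-list cube by
    applying `op` element-wise across the slices along that axis."""
    return [
        [op(sl[i][j] for sl in cube) for j in range(len(cube[0][0]))]
        for i in range(len(cube[0]))
    ]

# a tiny 2 (time) x 2 (lat) x 3 (lon) cube
cube = [
    [[1, 5, 2], [0, 3, 7]],   # t = 0
    [[4, 1, 9], [2, 2, 6]],   # t = 1
]
flat = reduce_cube(cube, op=max)   # 2 x 3 map of per-cell maxima over time
```

A real apex-style workflow would chain such reductions across dimensions of a terabyte-scale cube, partitioned over the Ophidia data store rather than held in memory.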
Personalized management of semantic, dynamic data in pervasive systems: Context-ADDICT revisited
Pub Date : 2014-07-21 DOI: 10.1109/HPCSim.2014.6903703
Emanuele Panigati
Due to the high information load to which everyone is exposed in everyday life, the rise of new systems fully supporting pervasive information distribution, analysis, and sharing becomes a key factor in enabling correct and useful interaction between humans and computer systems. Such systems must make it possible to manage, integrate, analyze, and possibly reason over large, heterogeneous data sets. The SuNDroPS system, briefly described in this paper, applies context-aware techniques to data gathering, shared services, and information distribution; applied to these tasks, its context-aware approach reduces the so-called information noise, delivering to users only the portion of information that is useful in their current context.
Pages: 323-326
Citations: 3
DWPE, a new data center energy-efficiency metric bridging the gap between infrastructure and workload
Pub Date : 2014-07-21 DOI: 10.1109/HPCSim.2014.6903784
T. Wilde, A. Auweter, M. Patterson, H. Shoukourian, Herbert Huber, A. Bode, D. Labrenz, C. Cavazzoni
To determine whether a High-Performance Computing (HPC) data center is energy efficient, various aspects have to be taken into account: the data center's power distribution and cooling infrastructure, the HPC system itself, the influence of the system management software, and the HPC workloads; all contribute to the overall energy efficiency of the data center. Currently, two well-established metrics are used to determine energy efficiency for HPC data centers and systems: Power Usage Effectiveness (PUE) and FLOPS per watt (as defined by the Green500 in their ranking list). PUE evaluates the overhead of running a data center, and FLOPS per watt characterizes the energy efficiency of a system running the High-Performance Linpack (HPL) benchmark, i.e., floating-point operations per second achieved with 1 watt of electrical power. Unfortunately, under closer examination even the combination of both metrics does not characterize the overall energy efficiency of an HPC data center. First, HPL does not constitute a representative workload for most of today's HPC applications, and the rev 0.9 Green500 run rules for power measurements allow for excluding subsystems (e.g., networking, storage, cooling). Second, even combining PUE with the FLOPS-per-watt metric neglects that the total energy efficiency of a system can vary with the characteristics of the data center in which it is operated. This is due to the different cooling technologies implemented in HPC systems and the difference in costs incurred by the data center in removing the heat using these technologies. To address these issues, this paper introduces the metrics system PUE (sPUE) and Data center Workload Power Efficiency (DWPE). sPUE calculates the overhead of operating a given system in a certain data center. DWPE is then calculated by determining the energy efficiency of a specific workload and dividing it by the sPUE. DWPE can then be used to define the energy efficiency of running a given workload on a specific HPC system in a specific data center, and is currently the only fully integrated metric suitable for rating an HPC data center's energy efficiency. In addition, DWPE allows the energy efficiency of different HPC systems in existing HPC data centers to be predicted, making it an ideal approach for guiding HPC system procurement. This paper concludes with a demonstration of the application of DWPE using a set of representative HPC workloads.
Pages: 893-901
Citations: 20
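The two metrics compose by simple division: DWPE is a workload's energy efficiency scaled down by the sPUE overhead factor. A minimal numeric sketch follows; the exact sPUE formulation is an assumption here (modeled as a PUE-style ratio of total attributable power to system power), since the abstract only states that sPUE captures per-system data-center overhead.

```python
def spue(facility_overhead_w, system_power_w):
    """Assumed sPUE formulation, analogous to classic PUE: total power
    attributable to the system (facility overhead for cooling/distribution
    plus the system itself) divided by the system's own power draw."""
    return (facility_overhead_w + system_power_w) / system_power_w

def dwpe(workload_gflops, system_power_w, spue_value):
    """DWPE as defined in the abstract: the workload's energy efficiency
    (here GFLOPS per watt) divided by the sPUE."""
    return (workload_gflops / system_power_w) / spue_value

# hypothetical numbers: 200 kW system, 50 kW of cooling/distribution overhead,
# sustaining 400,000 GFLOPS on a given workload
s = spue(facility_overhead_w=50_000, system_power_w=200_000)        # 1.25
e = dwpe(workload_gflops=400_000, system_power_w=200_000, spue_value=s)
print(e)  # 2 GFLOPS/W at the system level becomes 1.6 GFLOPS/W for the data center
```

The point of the division is visible in the example: the same workload on the same machine rates lower in a data center with costlier heat removal, which is exactly the infrastructure/workload gap the metric bridges.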
A HPC infrastructure for processing and visualizing neuro-anatomical images obtained by Confocal Light Sheet Microscopy
Pub Date : 2014-07-21 DOI: 10.1109/HPCSim.2014.6903741
A. Bria, G. Iannello, P. Soda, Hanchuan Peng, G. Erbacci, G. Fiameni, Giacomo Mariani, R. Mucci, M. Rorro, F. Pavone, L. Silvestri, P. Frasconi, Roberto Cortini
Scientific problems involving the processing of large amounts of data require effort in integrating proper services and applications that facilitate the research activity, interacting with high-performance computing resources. Easier access to these resources has a profound impact on research in neuroscience, leading to advances in the management and processing of neuro-anatomical images. An ever-increasing amount of data is constantly collected, with a consequent demand for top-class computational resources to process it. In this paper, an HPC infrastructure for the management and processing of neuro-anatomical images is presented, introducing the effort made to optimize and integrate specific applications in order to fully exploit the available resources.
Pages: 592-599
Citations: 1
Analysing Hadoop performance in a multi-user IaaS Cloud
Pub Date : 2014-07-21 DOI: 10.1109/HPCSIM.2014.6903713
Javier Conejero, María Blanca Caminero, C. Carrión
Over the last few years, Big Data analysis (i.e., crunching enormous amounts of data from different sources to extract useful knowledge for improving business objectives) has attracted huge attention from enterprises and research institutions. One of the most successful paradigms that has gained popularity for analysing this huge amount of data is MapReduce (and particularly Hadoop, its open-source implementation). However, Hadoop-based applications require massive amounts of resources in order to conduct different analyses of large amounts of data. These growing requirements that research and enterprises place on actual computing infrastructures drive the adoption of Cloud computing, where there is an increasing demand for Hadoop as a Service. Since Hadoop requires a distributed environment in order to operate, a significant problem is where resources are located. In Cloud environments, this problem lies mainly in the criteria for Virtual Machine (VM) placement. The work presented in this paper focuses on the analysis of performance, power consumption, and resource usage by Hadoop applications when deploying Hadoop on Virtual Clusters (VCs) within a private IaaS Cloud. More precisely, the impact of different VM placement strategies on Hadoop-based application performance, power consumption, and resource usage is measured. As a result, some conclusions on the optimal criteria for VM deployment are provided.
Pages: 399-406
Citations: 7
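The MapReduce paradigm the abstract builds on reduces to three stages: a map that emits key/value pairs, a shuffle that groups values by key, and a reduce over each group. The sketch below is plain Python for illustration, not the Hadoop API, and ignores the distribution across VMs that the paper actually measures.

```python
from collections import defaultdict

def map_phase(records, mapper):
    """Apply the user-supplied mapper to every record, emitting (key, value) pairs."""
    for record in records:
        yield from mapper(record)

def shuffle(pairs):
    """Group all emitted values by key, as Hadoop's shuffle/sort stage does."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    """Apply the user-supplied reducer to each key's list of values."""
    return {key: reducer(key, values) for key, values in groups.items()}

# the classic word-count job
lines = ["big data on hadoop", "hadoop on the cloud"]
mapped = map_phase(lines, lambda line: ((word, 1) for word in line.split()))
counts = reduce_phase(shuffle(mapped), lambda key, values: sum(values))
```

In a real deployment each stage runs in parallel across cluster nodes, which is why VM placement (how map and reduce tasks land on physical hosts) shapes both performance and power consumption.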
Supercomputer simulations of platelet activation in blood plasma at multiple scales
Pub Date : 2014-07-21 DOI: 10.1109/HPCSim.2014.6903802
Seetha Pothapragada
Thrombogenicity in cardiovascular devices and pathologies is associated with flow-induced shear-stress activation of platelets resulting from pathological flow patterns. This platelet activation process poses a major modeling challenge, as it covers disparate spatiotemporal scales, from the flow down to the cellular, subcellular, and molecular scales. This challenge can be resolved by implementing multiscale simulations feasible only on supercomputers. The simulation must couple the macroscopic effects of blood plasma flow and stresses to microscopic platelet dynamics. In an attempt to model this complex multiscale behavior, we first developed a phenomenological three-dimensional coarse-grained molecular dynamics (CGMD) particle-based model. This model depicts resting platelets and simulates their characteristic filopodia formation observed during activation. Simulation results are compared with in vitro measurements of activated-platelet morphological changes, such as the core axes and the filopodia thicknesses and lengths, after exposure to the prescribed flow-induced shear stresses. More recently, we extended this model by incorporating the platelet into a Dissipative Particle Dynamics (DPD) blood plasma flow and developed a dynamic coupling scheme that allows the simulation of flow-induced shear-stress platelet activation. This portion of research is in progress.
Pages: 1011-1013
Citations: 2
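Dissipative Particle Dynamics, which the abstract above uses to carry the blood plasma flow, builds each pairwise interaction from a conservative, a dissipative, and a random force linked by the fluctuation-dissipation relation (sigma squared equals 2 gamma kT). The following is a minimal single-pair sketch of standard DPD only; the authors' coarse-grained platelet model and coupling scheme are not reproduced here, and all parameter values are hypothetical defaults.

```python
import math
import random

def dpd_force(ri, rj, vi, vj, a=25.0, gamma=4.5, kT=1.0, rc=1.0, dt=0.01, rng=None):
    """Standard DPD force on particle i from particle j (illustrative parameters)."""
    rng = rng if rng is not None else random
    rij = [x - y for x, y in zip(ri, rj)]
    r = math.sqrt(sum(c * c for c in rij))
    if r >= rc or r == 0.0:               # interactions vanish beyond the cutoff
        return [0.0, 0.0, 0.0]
    e = [c / r for c in rij]              # unit vector from j toward i
    w = 1.0 - r / rc                      # soft weight function w_R(r)
    vij = [x - y for x, y in zip(vi, vj)]
    ev = sum(ec * vc for ec, vc in zip(e, vij))   # relative speed along e
    sigma = math.sqrt(2.0 * gamma * kT)   # fluctuation-dissipation relation
    fc = a * w                            # conservative: soft repulsion
    fd = -gamma * (w * w) * ev            # dissipative: pairwise friction, w_D = w_R^2
    fr = sigma * w * rng.gauss(0.0, 1.0) / math.sqrt(dt)   # random: thermal noise
    f = fc + fd + fr
    return [f * ec for ec in e]
```

With the dissipative and random terms switched off (gamma = 0), the result reduces to the soft conservative repulsion alone, which makes the sketch easy to check by hand.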
Evaluation of Intel Xeon E5-2600v2 based cluster for technical computing workloads
Pub Date : 2014-07-21 DOI: 10.1109/HPCSim.2014.6903787
P. Gepner, V. Gamayunov, Wieslawa Litke, L. Sauge, C. Mazauric
In Intel's CPU release model, the new Ivy Bridge is a “TICK” that follows the Sandy Bridge (“TOCK”) microarchitecture principles but, after a die shrink, is manufactured at 22 nm. It also incorporates new microarchitectural upgrades. In this paper we evaluate the performance of a 16-node bi-socket cluster based on this 3rd-generation Intel Xeon Processor E5-2697v2, aimed at the server and workstation market. The architectural improvements are assessed via the High Performance Computing Challenge (HPCC) benchmarks and the NAS Parallel Benchmarks (NPB), while the interconnect technology is exercised by the standard Intel® MPI Benchmarks suite. Finally, we test the performance of the new system using a subset of the PRACE consortium benchmark suite. We compare the results against tests performed on clusters based on previous generations of Intel Xeon processors: the Intel Xeon E5-2680 (“Sandy Bridge-EP”), Intel Xeon 5680 (“Westmere-EP”), and Intel Xeon 5570 (“Nehalem-EP”), respectively.
{"title":"Evaluation of Intel Xeon E5-2600v2 based cluster for technical computing workloads","authors":"P. Gepner, V. Gamayunov, Wieslawa Litke, L. Sauge, C. Mazauric","doi":"10.1109/HPCSim.2014.6903787","DOIUrl":"https://doi.org/10.1109/HPCSim.2014.6903787","url":null,"abstract":"In Intel's CPU releasing model, the new Ivy Bridge is a “TICK” that follows Sandy Bridge's (“TOCK”) microarchitecture principles, however, after undergoing a die shrink it is manufactured at 22nm. It also incorporates new micro-architectural upgrades. In this paper we shall evaluate the performance of a 16 bi-socket node cluster based on this 3rd generation Intel Xeon Processor E5-2697v2 meant for server and workstation market. The new architectural improvements are assessed via High Performance Computing Challenge (HPCC) benchmarks and NAS Parallel Benchmarks (NPB) where the interconnect technology is challenged by the standard Intel® MPI Benchmark suite performance evaluator. Finally we tested performance of the new system using the subset of the benchmark from PRACE consortium. We compare achieved results against the outcomes of the tests performed on clusters based on previous generations of Intel Xeon processors: Intel Xeon E5-2680 (“Sandy Bridge-EP”), Intel Xeon 5680 (“Westmere-EP”) and Intel Xeon 5570 (“Nehalem-EP”) respectively.","PeriodicalId":6469,"journal":{"name":"2014 International Conference on High Performance Computing & Simulation (HPCS)","volume":"5 1","pages":"919-926"},"PeriodicalIF":0.0,"publicationDate":"2014-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79609984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
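Suites such as HPCC and the Intel MPI Benchmarks conventionally time a simple kernel several times and report the best observed rate, so that scheduling noise does not penalize the result. A toy Python sketch of that best-of-N pattern on a STREAM-triad-style kernel (illustrative only; this is not part of HPCC, NPB, or the Intel MPI Benchmarks):

```python
import time

def time_triad(n=1_000_000, reps=5, alpha=2.0):
    """Time y = alpha*x + y over several repetitions and report the best GFLOP/s."""
    x = [1.0] * n
    y = [2.0] * n
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        y = [alpha * xi + yi for xi, yi in zip(x, y)]   # the timed kernel
        best = min(best, time.perf_counter() - t0)      # keep the fastest run
    flops = 2 * n                 # one multiply and one add per element
    return flops / best / 1e9     # GFLOP/s for the best repetition
```

Real benchmark suites use compiled kernels, pinned threads, and warmed caches; the point of the sketch is only the best-of-N measurement discipline.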
A GPU accelerated hybrid lattice-grid algorithm for options pricing
Pub Date : 2014-07-21 DOI: 10.1109/HPCSim.2014.6903765
Joan O. Omeru, David B. Thomas
The pricing of financial derivatives is an important problem in risk analysis and real-time trading. The need for faster and more accurate pricing has led financial institutions to adopt GPU technology, but this means we need new pricing algorithms designed specifically for GPU architectures. This research tackles the design of adaptable algorithms for option evaluation using lattices, a commonly used numerical technique. Usually lattice nodes are placed on a fixed grid at a high resolution, but by coarsening the grid in areas of low error, we can reduce run-time without a reduction in accuracy. We show that this adaptable grid can be designed to map onto the underlying architecture of warp-based GPUs, providing a tradeoff between faster execution at the same error, or lower error for the same execution speed. We implemented this algorithm in platform-independent OpenCL, and evaluated it on the Nvidia Quadro K4000, across different option classes. We present accuracy and speed-up results from using our hybrid lattice mesh model over an equivalent standard lattice implementation.
{"title":"A GPU accelerated hybrid lattice-grid algorithm for options pricing","authors":"Joan O. Omeru, David B. Thomas","doi":"10.1109/HPCSim.2014.6903765","DOIUrl":"https://doi.org/10.1109/HPCSim.2014.6903765","url":null,"abstract":"The pricing of financial derivatives is an important problem in risk analysis and real-time trading. The need for faster and more accurate pricing has led financial institutions to adopt GPU technology, but this means we need new pricing algorithms designed specifically for GPU architectures. This research tackles the design of adaptable algorithms for option evaluation using lattices, a commonly used numerical technique. Usually lattice nodes are placed on a fixed grid at a high resolution, but by coarsening the grid in areas of low error, we can reduce run-time without a reduction in accuracy. We show that this adaptable grid can be designed to map onto the underlying architecture of warp-based GPUs, providing a tradeoff between faster execution at the same error, or lower error for the same execution speed. We implemented this algorithm in platform-independent OpenCL, and evaluated it on the Nvidia Quadro K4000, across different option classes. We present accuracy and speed-up results from using our hybrid lattice mesh model over an equivalent standard lattice implementation.","PeriodicalId":6469,"journal":{"name":"2014 International Conference on High Performance Computing & Simulation (HPCS)","volume":"4 12 1","pages":"758-765"},"PeriodicalIF":0.0,"publicationDate":"2014-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78474644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
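The baseline that the hybrid lattice-grid algorithm above refines is the classic uniform binomial lattice. A minimal Cox-Ross-Rubinstein pricer for a European option follows; this is a sketch of the standard method only, and the paper's adaptive grid coarsening and warp-level GPU mapping are not shown.

```python
import math

def crr_price(S0, K, r, sigma, T, steps, call=True):
    """European option on a uniform Cox-Ross-Rubinstein binomial lattice."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))   # up-move factor
    d = 1.0 / u                           # down-move factor
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)              # one-step discount factor
    # Payoffs at the lattice leaves (j up-moves out of `steps`).
    vals = [max(S0 * u**j * d**(steps - j) - K, 0.0) if call
            else max(K - S0 * u**j * d**(steps - j), 0.0)
            for j in range(steps + 1)]
    # Backward induction toward the root node.
    for _ in range(steps):
        vals = [disc * (p * vals[j + 1] + (1 - p) * vals[j])
                for j in range(len(vals) - 1)]
    return vals[0]
```

For an at-the-money call with S0 = K = 100, r = 0.05, sigma = 0.2, T = 1 and a few hundred steps, the lattice price converges toward the Black-Scholes value of about 10.45, and call and put prices from the same tree satisfy put-call parity.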
Journal
2014 International Conference on High Performance Computing & Simulation (HPCS)