Pub Date: 2014-07-21, DOI: 10.1109/HPCSim.2014.6903730
Jih-Sheng Shen, Pao-Ann Hsiung, Juin-Ming Lu
Due to the need to support concurrent execution of versatile applications, system complexity, in terms of the number of cores, has increased drastically from tens to hundreds or even thousands of cores. These complex systems usually contain heterogeneous cores or processing elements, such as different processor cores, memories, and several Silicon Intellectual Properties (SIPs). The Network-on-Chip (NoC) was proposed to provide scalability and higher throughput for these heterogeneous multi-core systems. However, general NoC infrastructure designs for multi-core systems usually lack the flexibility to support different processing requirements such as performance, power, reliability, and response time. It is therefore helpful if designers can provide a reconfigurable NoC design so that these requirements can be supported more easily. In this work, we take an existing reconfigurable NoC as an example and discuss the related hardware and software issues. Issues such as the reconfiguration time overhead must be considered in the design of a reconfigurable NoC so that it can be used for heterogeneous multi-core systems.
{"title":"Reconfigurable Network-on-chip design for heterogeneous multi-core system architecture","authors":"Jih-Sheng Shen, Pao-Ann Hsiung, Juin-Ming Lu","doi":"10.1109/HPCSim.2014.6903730","DOIUrl":"https://doi.org/10.1109/HPCSim.2014.6903730","url":null,"abstract":"Due to the need to support concurrent executions of versatile applications, the system complexity, in terms of the number of cores, is drastically increased from tens to hundreds or thousands of cores. These complex systems usually contain heterogeneous cores or processing elements such as different processor cores, memories, and several Silicon Intellectual Properties (SIPs). Network-on-chip (NoC) was proposed to provide scalability and higher throughput for these heterogeneous multi-core systems. However, general designs of NoC infrastructures for multi-core systems usually lack the flexibility to support different processing requirements such as performance, power, reliability, and response time. It is helpful if designers can provide a reconfigurable NoC design so that these requirements can be supported more easily. In this work, we take an existing reconfigurable NoC for example and discuss related hardware and software issues. Some issues such as the reconfiguration time overhead must be considered in the design of a reconfigurable NoC such that it can be used for heterogeneous multi-core systems.","PeriodicalId":6469,"journal":{"name":"2014 International Conference on High Performance Computing & Simulation (HPCS)","volume":"1 1","pages":"523-526"},"PeriodicalIF":0.0,"publicationDate":"2014-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87371168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-07-21, DOI: 10.1109/HPCSim.2014.6903754
A. Asaithambi, V. Valev, A. Krzyżak, V. Zeljkovic
This paper explores feature selection and classifier combination when binary features are used. The concept of Non-Reducible Descriptors (NRDs) for binary features is introduced. NRDs are descriptors of patterns that do not contain any redundant information. The underlying mathematical model for the present approach is based on learning Boolean formulas, which are used to represent NRDs as conjunctions. Starting with a description of a computational procedure for constructing all NRDs for a pattern, a two-step solution method is presented for the feature selection problem. In the first step, the method computes weights of features during the construction of NRDs. In the second step, these weights are updated based on repeated occurrences of features in the constructed NRDs. The paper then presents a new procedure for combining classifiers based on the votes computed for the different classifiers. This procedure uses three different approaches for obtaining a single combined classifier: majority voting, averaging, and randomized voting.
{"title":"A new approach for binary feature selection and combining classifiers","authors":"A. Asaithambi, V. Valev, A. Krzyżak, V. Zeljkovic","doi":"10.1109/HPCSim.2014.6903754","DOIUrl":"https://doi.org/10.1109/HPCSim.2014.6903754","url":null,"abstract":"This paper explores feature selection and combining classifiers when binary features are used. The concept of Non-Reducible Descriptors (NRDs) for binary features is introduced. NRDs are descriptors of patterns that do not contain any redundant information. The underlying mathematical model for the present approach is based on learning Boolean formulas which are used to represent NRDs as conjunctions. Starting with a description of a computational procedure for the construction of all NRDs for a pattern, a two-step solution method is presented for the feature selection problem. The method computes weights of features during the construction of NRDs in the first step. The second step in the method then updates these weights based on repeated occurrences of features in the constructed NRDs. The paper then proceeds to present a new procedure for combining classifiers based on the votes computed for different classifiers. This procedure uses three different approaches for obtaining the single combined classifier, using majority, averaging, and randomized vote.","PeriodicalId":6469,"journal":{"name":"2014 International Conference on High Performance Computing & Simulation (HPCS)","volume":"49 1","pages":"681-687"},"PeriodicalIF":0.0,"publicationDate":"2014-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82247846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-07-21, DOI: 10.1109/HPCSim.2014.6903798
P. Grani
This paper is intended as an excursus on the different solutions to which an optical Network-on-Chip (NoC) can be applied, starting from passive NoC topologies (Mesh/Torus) enhanced by a simple shared optical ring and moving to more complex all-optical reconfigurable networks, in a state-of-the-art coherence-assisted Chip Multi-Processor (CMP). We investigate the effects on performance and power consumption of a CMP, comparing these solutions against a standard electronic Mesh (passive) as well as both a standard Torus (electronic baseline) and an optical Torus with sequential path setup performed through a symmetric electronic helper network (optical baseline, active).
{"title":"From hybrid electro-photonic to all-optical on-chip interconnections for future CMPs","authors":"P. Grani","doi":"10.1109/HPCSim.2014.6903798","DOIUrl":"https://doi.org/10.1109/HPCSim.2014.6903798","url":null,"abstract":"Wants to be an excursus on the different solutions in which an optical Network-on-Chip (NoC) could be applied to, starting from passive NoC topologies (Mesh/Torus) enhanced by a simple shared optical ring and moving to more complex all-optical reconfigurable networks, in a state-of-the-art coherence assisted Chip-Multi-Processor (CMP). We investigate performance and power consumption effects on a CMP comparing them against a standard electronic Mesh (passive) and both a standard Torus (electronic baseline) and an optical Torus with sequential path-setup done through a symmetric electronic helper network (optical baseline, active).","PeriodicalId":6469,"journal":{"name":"2014 International Conference on High Performance Computing & Simulation (HPCS)","volume":"16 1","pages":"999-1001"},"PeriodicalIF":0.0,"publicationDate":"2014-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82728423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-07-21, DOI: 10.1109/HPCSim.2014.6903712
Daniela Loreti, A. Ciampolini
Cloud computing is a crucial computational paradigm for modern companies because it can relieve them of managing their ever-growing IT infrastructure. By dynamically offering abundant computational resources, the cloud can also simplify the execution of CPU-intensive applications. Modern data centers for cloud computing face the challenge of growing complexity due to the increasing number of users and their growing resource requests. Much effort is now concentrated on providing the cloud infrastructure with autonomic behavior, so that it can make decisions about virtual machine (VM) management across the datacenter's nodes without human intervention. While most of these solutions are intrinsically centralized and suffer from scalability and reliability problems, we investigate the possibility of providing the cloud with a decentralized, self-organizing behavior. To this purpose we present a novel VM migration policy suitable for a distributed environment, where hosts can exchange status information with each other according to a predefined protocol. The main goal of the policy is to balance the computational load on the datacenter's physical hosts by conveniently moving virtual machines (VMs). We tested the policy's performance by means of an ad hoc simulator.
{"title":"A distributed self-balancing policy for virtual machine management in cloud datacenters","authors":"Daniela Loreti, A. Ciampolini","doi":"10.1109/HPCSim.2014.6903712","DOIUrl":"https://doi.org/10.1109/HPCSim.2014.6903712","url":null,"abstract":"Cloud Computing is a crucial computational paradigm for modern companies because it can discharge them from managing their ever growing IT infrastructure. Dynamically offering a plenty of computational resources, the cloud can also simplify the execution of CPU-intensive applications. Modern data centers for cloud computing are facing the challenge of a growing complexity due to the increasing number of users and their augmenting resource requests. A lot of efforts are now concentrated on providing the cloud infrastructure with autonomic behavior, so that it can take decisions about virtual machine (VM) management across the datacenter's nodes without human intervention. While the major part of these solutions is intrinsically centralized and suffers of scalability and reliability problems, we investigate the possibility to provide the cloud with a decentralized self-organizing behavior. To this purpose we present a novel VM migration policy suitable for a distributed environment, where hosts can exchange status information with each other according to a predefined protocol. The main goal of the policy is to balance the computational load on datacenter's physical hosts by conveniently moving virtual machines (VMs). We tested the policy performance by means of an ad hoc built simulator.","PeriodicalId":6469,"journal":{"name":"2014 International Conference on High Performance Computing & Simulation (HPCS)","volume":"22 1","pages":"391-398"},"PeriodicalIF":0.0,"publicationDate":"2014-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80598988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-07-21, DOI: 10.1109/HPCSim.2014.6903751
V. Zeljkovic, Du Zhang, V. Valev, Zhongyu Zhang, Sheng-Jun Zhu, Junjie Li
A real-time automated personal access control system is proposed to detect moving objects and to localize, extract, and recognize their faces in real image sequences. The described method encompasses two important issues in personal access control systems that have received increasing attention over the years: moving object detection and face recognition. It is tested on video recordings of a personal-access-controlled area. The efficiency of the described system is illustrated on four real-world interior video sequences recorded in a mixed indoor/outdoor environment with slight illumination changes.
{"title":"Personal access control system using moving object detection and face recognition","authors":"V. Zeljkovic, Du Zhang, V. Valev, Zhongyu Zhang, Sheng-Jun Zhu, Junjie Li","doi":"10.1109/HPCSim.2014.6903751","DOIUrl":"https://doi.org/10.1109/HPCSim.2014.6903751","url":null,"abstract":"Real time automated personal access control system is proposed in order to detect the moving objects, localize, extract and recognize their faces in real image sequence. The described method encompasses two important issues in personal access control system that receives increased attention over years: moving object detection and face recognition. It is tested on personal access controlled area video testing. The efficiency of the described system is illustrated on four real world interior video sequences recorded in indoor/outdoor mixed environment with slight illumination changes.","PeriodicalId":6469,"journal":{"name":"2014 International Conference on High Performance Computing & Simulation (HPCS)","volume":"36 1","pages":"662-669"},"PeriodicalIF":0.0,"publicationDate":"2014-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89496551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-07-21, DOI: 10.1109/HPCSim.2014.6903803
M. Flatz, M. Vajtersic
Nonnegative Matrix Factorization (NMF) is a technique to approximate a large nonnegative matrix as the product of two significantly smaller nonnegative matrices. Since matrices can be seen as second-order tensors, NMF can be generalized to Nonnegative Tensor Factorization (NTF). To compute an NTF, the tensor problem can be transformed into a matrix problem by matricization. Any NMF algorithm can then be used to process such a matricized tensor, including a method based on Newton iteration. Here, an approach is presented that adapts our parallel design of the Newton algorithm for NMF to compute an NTF in parallel for tensors of any order.
{"title":"Parallel nonnegative tensor factorization via newton iteration on matrices","authors":"M. Flatz, M. Vajtersic","doi":"10.1109/HPCSim.2014.6903803","DOIUrl":"https://doi.org/10.1109/HPCSim.2014.6903803","url":null,"abstract":"Nonnegative Matrix Factorization (NMF) is a technique to approximate a large nonnegative matrix as a product of two significantly smaller nonnegative matrices. Since matrices can be seen as second-order tensors, NMF can be generalized to Nonnegative Tensor Factorization (NTF). To compute an NTF, the tensor problem can be transformed into a matrix problem by using matricization. Any NMF algorithm can be used to process such a matricized tensor, including a method based on Newton iteration. Here, an approach will be presented to adopt our parallel design of the Newton algorithm for NMF to compute an NTF in parallel for tensors of any order.","PeriodicalId":6469,"journal":{"name":"2014 International Conference on High Performance Computing & Simulation (HPCS)","volume":"6 11-12","pages":"1014-1015"},"PeriodicalIF":0.0,"publicationDate":"2014-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91500775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-07-21, DOI: 10.1109/HPCSim.2014.6903807
Giovanni Ponti, Filippo Palombi, D. Abate, F. Ambrosino, G. Aprea, T. Bastianelli, F. Beone, R. Bertini, G. Bracco, M. Caporicci, B. Calosso, M. Chinnici, Antonio Colavincenzo, A. Cucurullo, P. Dangelo, M. D. Rosa, P. D. Michele, A. Funel, G. Furini, Dante Giammattei, S. Giusepponi, R. Guadagni, G. Guarnieri, A. Italiano, S. Magagnino, Angelo Mariano, G. Mencuccini, C. Mercuri, S. Migliori, P. Ornelli, S. Pecoraro, A. Perozziello, S. Pierattini, S. Podda, F. Poggi, A. Quintiliani, A. Rocchi, C. Sciò, F. Simoni, A. Vita
Medium-size HPC clusters play an important role in the HPC landscape in that they provide both a training environment for system scalability and a flexible production platform for a large class of numerical problems. In this poster we present CRESCO4, the latest medium-size HPC cluster purchased by ENEA, which has been in operation for a few months. CRESCO4 is part of a family of HPC systems, all integrated within ENEAGRID, a large infrastructure for cloud computing that includes all the computational facilities installed at several ENEA sites in Italy.
{"title":"The role of medium size facilities in the HPC ecosystem: the case of the new CRESCO4 cluster integrated in the ENEAGRID infrastructure","authors":"Giovanni Ponti, Filippo Palombi, D. Abate, F. Ambrosino, G. Aprea, T. Bastianelli, F. Beone, R. Bertini, G. Bracco, M. Caporicci, B. Calosso, M. Chinnici, Antonio Colavincenzo, A. Cucurullo, P. Dangelo, M. D. Rosa, P. D. Michele, A. Funel, G. Furini, Dante Giammattei, S. Giusepponi, R. Guadagni, G. Guarnieri, A. Italiano, S. Magagnino, Angelo Mariano, G. Mencuccini, C. Mercuri, S. Migliori, P. Ornelli, S. Pecoraro, A. Perozziello, S. Pierattini, S. Podda, F. Poggi, A. Quintiliani, A. Rocchi, C. Sciò, F. Simoni, A. Vita","doi":"10.1109/HPCSim.2014.6903807","DOIUrl":"https://doi.org/10.1109/HPCSim.2014.6903807","url":null,"abstract":"Medium size HPC clusters play an important role in the HPC landscape in that they provide both the training environment for system scalability and a flexible production field for a large class of numerical problems. In this poster we present CRESCO4, the latest medium size HPC cluster purchased by ENEA, in operation since few months. CRESCO4 is part of a family of HPC systems, all integrated within ENEAGRID, a large infrastructure for cloud computing, which includes all the computational facilities installed at several ENEA sites in Italy.","PeriodicalId":6469,"journal":{"name":"2014 International Conference on High Performance Computing & Simulation (HPCS)","volume":"60 1","pages":"1030-1033"},"PeriodicalIF":0.0,"publicationDate":"2014-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73833970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-07-21, DOI: 10.1109/HPCSim.2014.6903676
Matthias Lieber, W. Nagel
The decomposition of one-dimensional workload arrays into consecutive partitions is a core problem of many load balancing methods, especially those based on space-filling curves. While previous work has shown that heuristics can be parallelized, only sequential algorithms exist for the optimal solution. However, centralized partitioning will become infeasible in the exascale era due to the vast number of tasks to be mapped to millions of processors. In this work, we first introduce optimizations to a published exact algorithm. Further, we investigate a hierarchical approach which combines a parallel heuristic and an exact algorithm to form a scalable and high-quality 1D partitioning algorithm. We compare load balance, execution time, and task migration of the algorithms for up to 262 144 processes using real-life workload data. The results show a speed-up of 300 times compared to an existing fast exact algorithm, while achieving nearly optimal load balance.
{"title":"Scalable high-quality 1D partitioning","authors":"Matthias Lieber, W. Nagel","doi":"10.1109/HPCSim.2014.6903676","DOIUrl":"https://doi.org/10.1109/HPCSim.2014.6903676","url":null,"abstract":"The decomposition of one-dimensional workload arrays into consecutive partitions is a core problem of many load balancing methods, especially those based on space-filling curves. While previous work has shown that heuristics can be parallelized, only sequential algorithms exist for the optimal solution. However, centralized partitioning will become infeasible in the exascale era due to the vast amount of tasks to be mapped to millions of processors. In this work, we first introduce optimizations to a published exact algorithm. Further, we investigate a hierarchical approach which combines a parallel heuristic and an exact algorithm to form a scalable and high-quality 1D partitioning algorithm. We compare load balance, execution time, and task migration of the algorithms for up to 262 144 processes using real-life workload data. The results show a 300 times speed-up compared to an existing fast exact algorithm, while achieving nearly the optimal load balance.","PeriodicalId":6469,"journal":{"name":"2014 International Conference on High Performance Computing & Simulation (HPCS)","volume":"137 1","pages":"112-119"},"PeriodicalIF":0.0,"publicationDate":"2014-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75549945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-07-21, DOI: 10.1109/HPCSim.2014.6903799
S. Fremal, P. Manneback
The delivery of data to computing resources in a short time is a crucial issue for the effectiveness of High Performance Computing. We meet this issue when, for example, designing drivers for virtual machines. We developed two tools to speed up data transfers between Xen virtual machines. The first one is a circular buffer shared in user memory space between the two communicating domains, allowing transfers without copying. The second pins pages in memory and transfers their Machine Frame Numbers (MFNs), significantly reducing the volume of transferred data. This paper briefly unveils the architecture of our tools and compares them with TCP sockets and XenSocket, a circular buffer in kernel memory space.
{"title":"Optimizing Xen inter-domain data transfer","authors":"S. Fremal, P. Manneback","doi":"10.1109/HPCSim.2014.6903799","DOIUrl":"https://doi.org/10.1109/HPCSim.2014.6903799","url":null,"abstract":"The delivery of data to computing ressources in a short time is a crucial issue for the effectiveness of High Performance Computing. We meet this issue when, for example, designing drivers for virtual machines. We developped two tools to speed up data transfers between Xen virtual machines. The first one is a circular buffer shared in user memory space between the two communicating domains and allowing transfers without copy. The second pins pages in memory and transfers their Machine Frame Number (MFN), significantly reducing the transfered data volume. This paper briefly unveils the architecture of our tools and compare them with TCP sockets and XenSocket, a circular buffer in kernel memory space.","PeriodicalId":6469,"journal":{"name":"2014 International Conference on High Performance Computing & Simulation (HPCS)","volume":"514 1-2 1","pages":"1002-1004"},"PeriodicalIF":0.0,"publicationDate":"2014-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78403101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-07-21, DOI: 10.1109/HPCSim.2014.6903740
Mattia Salnitri, P. Giorgini
High Performance Computing (HPC) techniques are essential in complex systems such as Socio-Technical Systems (STSs), where humans and organizations are elements of the same system along with technical infrastructures and hardware/software components. For example, several HPC approaches have been successfully applied to support and facilitate the distribution or aggregation of computational power among independent and atomic components (e.g., smart meters used to solve and/or simulate complex models). However, HPC techniques have to be studied and developed without underestimating the problem of security, which, given the interaction-centric nature of STSs, has to be considered not only from the perspective of a single component but for the system as a whole. In our previous work, we proposed SecBPMN, a framework to support the design of secure STSs. It is used to model the interaction design and security policies of an STS, and it supports their verification through a querying engine. In this paper, we describe how SecBPMN has been successfully used to study security in an Air Traffic Management (ATM) system, and we show how it can also provide efficient support for HPC techniques applied in complex and heterogeneous environments.
{"title":"Modeling and verification of ATM security policies with SecBPMN","authors":"Mattia Salnitri, P. Giorgini","doi":"10.1109/HPCSim.2014.6903740","DOIUrl":"https://doi.org/10.1109/HPCSim.2014.6903740","url":null,"abstract":"High Performance Computing (HPC) techniques are essential in complex systems such as Socio-Technical Systems (STSs), where humans and organizations are elements of the same system along with technical infrastructures and hardware/software components. For example, several HPC approaches have been successfully applied to support and facilitate distribution or aggregation of computation power among independent and atomic components (e.g., smart meters to solve and/or simulate complex models). However, HPC techniques have to be studied and developed without underestimating the problem of security that, given the interaction-centric nature of STSs, has to be considered not only from the single component perspective but for the system as a whole. In our previous work, we have proposed SecBPMN, a framework to support the design of secure STSs. It is used to model the interaction design and security policies of a STS and it supports their verification through a querying engine. In this paper, we describe how SecBPMN has been successfully used for the study of security in an Air Traffic Management (ATM) system, and we show how it can result also an efficient support when of HPC techniques when applied in complex and heterogeneous environments.","PeriodicalId":6469,"journal":{"name":"2014 International Conference on High Performance Computing & Simulation (HPCS)","volume":"29 1","pages":"588-591"},"PeriodicalIF":0.0,"publicationDate":"2014-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76743643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}