Pub Date: 2012-07-02 | DOI: 10.1109/HPCSim.2012.6266920
2012 International Conference on High Performance Computing & Simulation (HPCS)
Network-on-Chip: Challenges for the interconnect and I/O-architecture
K. Hofmann
3D ICs are emerging as a promising solution to the scalability, power, and performance demands of next-generation Systems-on-Chip (SoCs). Along with these advantages, 3D integration imposes a number of challenges with respect to cost, technological reliability, thermal budget, and integration. Networks-on-Chip (NoCs), which have been thoroughly investigated as scalable interconnects for 2D SoC design, are equally relevant to 3D IC design. This paper presents the particular challenges of NoC interconnect architecture design, such as the need for high throughput and/or low latency, high reliability, and low power consumption.
Pub Date: 2012-07-02 | DOI: 10.1109/HPCSim.2012.6267003
Impact of design parameters on performance of adaptive Network-on-Chips
I. Pratomo, S. Pillement
In current embedded systems, the communication infrastructure requires different Quality-of-Service (QoS) levels depending on the application domain. An adaptive Network-on-Chip (NoC) can dynamically adapt its characteristics to provide the required QoS and flexible communication. Designing such a NoC is very time consuming. In this paper, we evaluate the impact of NoC design parameters on the performance of adaptive NoCs. Latency and throughput results were evaluated using the Noxim simulator.
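The kind of latency/throughput trade-off measured with Noxim can be sketched with a toy analytic model. The sketch below is purely illustrative and is not Noxim: it models each router of an n x n mesh as an M/M/1 queue under uniform random traffic, so all function names and parameter values are assumptions made for this example.

```python
# Illustrative sketch (not Noxim): estimate average packet latency in an
# n x n mesh NoC by modeling each router as an M/M/1 queue.  All names
# and parameters are hypothetical; the point is only to show how design
# parameters (mesh size, injection rate) shift latency and saturation.

def avg_hops(n: int) -> float:
    """Average Manhattan distance between two nodes of an n x n mesh
    under uniform random traffic: twice the exact 1-D mean distance."""
    return 2 * (n * n - 1) / (3 * n)

def avg_latency(n: int, inject_rate: float, service_rate: float) -> float:
    """Mean end-to-end latency in cycles, or inf past saturation."""
    hops = avg_hops(n)
    # Under uniform traffic, each router carries roughly hops * inject_rate.
    load = hops * inject_rate
    if load >= service_rate:
        return float("inf")                   # network saturated
    per_hop = 1.0 / (service_rate - load)     # M/M/1 sojourn time per router
    return hops * per_hop

for rate in (0.05, 0.10, 0.15):
    print(f"8x8 mesh, inject={rate}: {avg_latency(8, rate, 1.0):.2f} cycles")
```

Even this crude model reproduces the qualitative behavior a simulator sweep would show: latency grows slowly at low injection rates and diverges near saturation.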
Pub Date: 2012-07-02 | DOI: 10.1109/HPCSim.2012.6266930
A joint peer-to-peer and network coding approach for large scale information management
Marco Picone, M. Amoretti, M. Martalò, Erind Meco, F. Zanichelli, G. Ferrari
The widespread availability of Internet connectivity makes it possible to share large amounts of information generated by highly heterogeneous, possibly mobile, sources. One scenario where this situation arises is that of smart cities, which are envisioned to generate and consume relevant information about their status to enhance the security and lifestyle of their citizens. In this context, a very challenging question is how this information can be maintained and distributed across the city itself. In this paper, we propose a system architecture based on the creation of a distributed geographic overlay network that achieves these goals. Moreover, information is redundantly encoded by means of randomized network coding in order to preserve resource availability dynamically and in a distributed fashion. Through simulations, we investigate the behavior of the proposed solution in terms of efficiency and speed of data publication/search, as well as resource availability and storage occupancy requirements.
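The randomized-network-coding idea (storing random combinations of fragments so that any sufficiently large independent subset recovers the data) can be sketched over GF(2), where combining is just XOR. This is a hypothetical minimal version written for illustration; the paper's implementation may differ, e.g. by coding over a larger field.

```python
import random

# Sketch of randomized network coding over GF(2): each stored block is a
# random XOR of the k original fragments together with its coefficient
# vector; k linearly independent blocks suffice to decode.

def encode(fragments, rng):
    """Return (coefficients, block): a random nonzero GF(2) combination."""
    k = len(fragments)
    coeffs = [rng.randint(0, 1) for _ in range(k)]
    while not any(coeffs):                   # skip the useless all-zero vector
        coeffs = [rng.randint(0, 1) for _ in range(k)]
    block = 0
    for c, frag in zip(coeffs, fragments):
        if c:
            block ^= frag
    return coeffs, block

def decode(combos, k):
    """Gaussian elimination over GF(2); None if combos are dependent."""
    rows = [(c[:], b) for c, b in combos]
    pivot_rows = [None] * k
    for col in range(k):
        piv = None
        for i, (c, _) in enumerate(rows):
            if c[col] == 1:
                piv = rows.pop(i)
                break
        if piv is None:
            return None                      # not enough independent combos
        for i, (c, b) in enumerate(rows):
            if c[col] == 1:                  # eliminate col from the rest
                rows[i] = ([x ^ y for x, y in zip(c, piv[0])], b ^ piv[1])
        pivot_rows[col] = piv
    frags = [0] * k                          # back-substitution
    for col in reversed(range(k)):
        c, b = pivot_rows[col]
        frags[col] = b
        for j in range(col + 1, k):
            if c[j]:
                frags[col] ^= frags[j]
    return frags

rng = random.Random(42)
fragments = [0xDE, 0xAD, 0xBE]               # three original data fragments
combos = []
while True:                                  # gather combos until decodable
    combos.append(encode(fragments, rng))
    result = decode(combos, len(fragments))
    if result is not None:
        break
print(result)                                # → [222, 173, 190]
```

Because each peer stores independent random combinations rather than fixed replicas, any peer can contribute to recovery, which is what lets availability be preserved in a distributed fashion.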
Pub Date: 2012-07-02 | DOI: 10.1109/HPCSim.2012.6266945
Is GPU enthusiasm vanishing?
C. Trinitis
In recent years, there has been considerable hype around porting compute-intensive kernels to GPUs, with claimed speedups sometimes exceeding 100x. However, for a number of compute-intensive applications investigated at TUM, the outcome looks somewhat different. In addition, the overhead of porting applications to GPUs, or, more generally speaking, to accelerators, needs to be taken into consideration. As accelerators can produce both very promising and very disappointing results (depending on the application), the community is, as usual, divided into GPU enthusiasts on one side and GPU opponents on the other. In both industrial and academic practice, the question arises of what to do with existing compute-intensive applications (often numerical simulation codes) that have existed for years or even decades and are treated as "never change a running system" code. Basically, these fall into three categories:
- code that should not be touched, as it will most likely no longer run if anything is modified (a complete rewrite is required for it to run efficiently);
- code whose compute-intensive parts can be rewritten (a partial rewrite is required); and
- code that can easily be ported to new programming paradigms (easy adaptation is possible).
Given that CPUs integrate more and more features known from accelerators, one could conclude that most codes will fall into the third category, as the required porting effort seems to be shrinking and compilers are constantly improving. However, although features like automatic parallelization can be handled by compilers, tuning by hand or via hardware-specific programming paradigms still outperforms generic approaches.
While GPU enthusiasts are mainly keen on CUDA (with some of them moving to OpenCL), GPU opponents claim that with aggressive optimization of compute-intensive numerical code, CPUs can reach equal or even better results than accelerators, effectively treating the vector units operating on AVX registers as on-chip accelerators. It is still not clear which programming interface will eventually become a de facto standard satisfying both CPU and accelerator programmers. Next to GPUs from NVIDIA and AMD, another interesting approach in the accelerator world is Intel's MIC architecture, around which a couple of supercomputing projects are already being built. Since it is based on the x86 ISA, including the full tool chain from compilers to debuggers to performance analysis tools, MIC aims at minimizing the porting effort from the programmer's point of view. The talk presents examples from high-performance computing that fall into the three categories above, and shows how these codes have been adapted to modern processor and accelerator architectures.
Pub Date: 2012-07-02 | DOI: 10.1109/HPCSim.2012.6266951
Case study: Laser nanoscale manufacturing (LaserNaMi)
Yong Yue, C. Maple, Dayou Li, Zuobin Wang
This project focuses on staff exchange between the partners, especially between the EU and Chinese partners, to research and develop new maskless laser nanoscale manufacturing technologies for low-cost, high-efficiency manufacturing of nanostructured surfaces and components, including periodic structures (nano gratings, anti-counterfeiting security markers, nanoimprint templates, self-cleaning and antireflection surface nanostructures, and nano sensors) and other arbitrary features for both 2D and 3D applications. The target feature size will be down to ~10 nm in the selected applications for maskless laser nanoscale manufacturing.
Pub Date: 2012-07-02 | DOI: 10.1109/HPCSim.2012.6266973
Turning control flow graphs into function calls: Code generation for heterogeneous architectures
Pablo Barrio, C. Carreras, Roberto Sierra, Tobias Kenter, Christian Plessl
Heterogeneous machines are gaining momentum in the High Performance Computing field due to their theoretical speedups and power efficiency. In practice, while some applications meet the performance expectations, heterogeneous architectures still require a tremendous effort from application developers. This work presents a code generation method for porting codes to heterogeneous platforms, based on transforming the control flow into function calls. The results show that the cost of the function-call mechanism is affordable for the tested HPC kernels. The complete toolchain, based on the LLVM compiler infrastructure, is fully automated once the sequential specification is provided.
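The core transformation (each basic block of the control flow graph becomes a function whose return value names its successor) can be sketched in a few lines. This is an illustrative trampoline written for this listing, not the paper's LLVM-based code generator; the block names and the shared state dict are invented for the example.

```python
# Each basic block of a small loop CFG (entry -> cond -> body -> cond -> exit)
# becomes a function that does its work and returns the next block to run,
# so a scheduler could in principle dispatch blocks to different devices.

def block_entry(state):
    state["i"], state["acc"] = 0, 0
    return block_cond

def block_cond(state):
    # Conditional branch: pick a successor based on the loop condition.
    return block_body if state["i"] < state["n"] else block_exit

def block_body(state):
    state["acc"] += state["i"]
    state["i"] += 1
    return block_cond

def block_exit(state):
    return None                        # no successor: program done

def run(entry, state):
    """Trampoline: follow successor 'calls' until a block returns None."""
    block = entry
    while block is not None:
        block = block(state)
    return state

state = run(block_entry, {"n": 5})
print(state["acc"])                    # → 10  (0+1+2+3+4)
```

Returning the successor instead of calling it directly keeps the stack flat, which is why trampolining is a common way to express CFG dispatch as plain function calls.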
Pub Date: 2012-07-02 | DOI: 10.1109/HPCSim.2012.6266893
Consolidation of multi-tier workloads with performance and reliability constraints
Andrea Sansottera, Davide Zoni, P. Cremonesi, W. Fornaciari
Server consolidation leverages hardware virtualization to reduce the operational cost of data centers through the intelligent placement of existing workloads. This work proposes a consolidation model that considers power, performance, and reliability aspects simultaneously. The model makes two main innovative contributions, focused on performance and reliability requirements. The first is the ability to guarantee average response-time constraints for multi-tier workloads. The second is the ability to model active/active server clusters, with enough spare capacity on the fail-over servers to handle the load of the failed ones. At the heart of the proposal is a non-linear optimization model, which has been linearized using two different exact techniques. Moreover, a heuristic method that quickly computes near-optimal solutions has been developed and validated.
Pub Date: 2012-07-02 | DOI: 10.1109/HPCSim.2012.6266889
FCFA: A semantic-based federated cloud framework architecture
G. Manno, W. Smari, L. Spalazzi
Cloud Computing is a paradigm that applies a service model to infrastructures, platforms, and software. In the last few years, this idea has been showing its potential and how, in the long run, it will affect Information Technology and the way we interface with computation and storage. This article introduces the FCFA project, a framework for ontology-based resource life-cycle management and provisioning in a federated Cloud Computing infrastructure. Federated Clouds are presumably the first step toward a "Cloud 2.0" scenario in which different providers can share their assets to create a free and open Cloud Computing marketplace. The contribution of this article is a ground-up redesign of a Cloud Computing infrastructure architecture that leverages semantic web technologies and natively supports federated resource provisioning.
Pub Date: 2012-07-02 | DOI: 10.1109/HPCSim.2012.6266983
Particle diffusion in prescribed electrostatic turbulence and sheared space dependent magnetic field
I. Petrisor, M. Negrea, C. Lalescu, D. Carati
This study is devoted to the calculation, by numerical simulation, of diffusion coefficients for a particle moving in a fluctuating electrostatic field superposed on a space-dependent, sheared magnetic field.
Pub Date: 2012-07-02 | DOI: 10.1109/HPCSim.2012.6266929
Measuring BitTorrent ecosystems
Carmen Guerrero López
Summary form only. BitTorrent is the most successful peer-to-peer application. In recent years, the research community has studied the BitTorrent ecosystem by collecting data from real BitTorrent swarms using different measurement techniques. This talk presents the first survey of these techniques, which constitutes a first step in the design of future measurement techniques and tools for analyzing large-scale systems. The techniques are classified as Macroscopic, Microscopic, and Complementary. Macroscopic techniques collect aggregated information about torrents and scale very well, being able to monitor up to hundreds of thousands of torrents in short periods of time. In contrast, Microscopic techniques operate at the peer level and focus on understanding performance aspects such as peers' download rates; they offer higher granularity but do not scale as well as Macroscopic techniques. Finally, Complementary techniques use recent extensions to the BitTorrent protocol to obtain both aggregated and peer-level information. The talk also summarizes the main challenges the research community faces in accurately measuring the BitTorrent ecosystem, such as reliably identifying peers or estimating peers' upload rates, and provides possible solutions to address these challenges.