HELICoiD: interdisciplinary and collaborative project for real-time brain cancer detection (Invited Paper)
R. Salvador, S. Ortega, D. Madroñal, H. Fabelo, R. Lazcano, G. Callicó, E. Juárez, R. Sarmiento, C. Sanz
HELICoiD is a European FP7 FET Open funded project. It is an interdisciplinary effort at the edge of the biomedical domain, bringing together neurosurgeons, computer scientists, and electronic engineers. The main target of the project was to provide a working demonstrator of an intraoperative image-guided surgery system for real-time brain cancer detection, in order to assist neurosurgeons during tumour resection procedures. One of the main problems associated with brain tumours is their infiltrative nature, which makes complete tumour resection a highly difficult task. By combining Hyperspectral Imaging and Machine Learning techniques, the project aimed to demonstrate that a precise determination of tumour boundaries is possible, thereby helping neurosurgeons minimize the amount of healthy tissue removed. The project partners included, besides several universities and companies, two hospitals where the demonstrator was tested during surgical procedures. This paper introduces the difficulties around brain tumour resection, states the main objectives of the project, and presents the materials, methodologies, and platforms used to propose a solution. A brief summary of the main results obtained is also included.
{"title":"HELICoiD: interdisciplinary and collaborative project for real-time brain cancer detection: Invited Paper","authors":"R. Salvador, S. Ortega, D. Madroñal, H. Fabelo, R. Lazcano, G. Callicó, E. Juárez, R. Sarmiento, C. Sanz","doi":"10.1145/3075564.3076262","DOIUrl":"https://doi.org/10.1145/3075564.3076262","url":null,"abstract":"The HELICoiD project is a European FP7 FET Open funded project. It is an interdisciplinary work at the edge of the biomedical domain, bringing together neurosurgeons, computer scientists and electronic engineers. The main target of the project was to provide a working demonstrator of an intraoperative image-guided surgery system for real-time brain cancer detection, in order to assist neurosurgeons during tumour resection procedures. One of the main problems associated to brain tumours is its infiltrative nature, which makes complete tumour resection a highly difficult task. With the combination of Hyperspectral Imaging and Machine Learning techniques, the project aimed at demonstrating that a precise determination of tumour boundaries was possible, helping this way neurosurgeons to minimize the amount of removed healthy tissue. The project partners involved, besides different universities and companies, two hospitals where the demonstrator was tested during surgical procedures. This paper introduces the difficulties around brain tumor resection, stating the main objectives of the project and presenting the materials, methodologies and platforms used to propose a solution. A brief summary of the main results obtained is also included.","PeriodicalId":398898,"journal":{"name":"Proceedings of the Computing Frontiers Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125530182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Self-Sustainability in Nano Unmanned Aerial Vehicles: A Blimp Case Study
D. Palossi, Andres Gomez, Stefan Draskovic, K. Keller, L. Benini, L. Thiele
Nano Unmanned Aerial Vehicles (UAVs), such as quadcopters, currently have very limited flight times, tens of minutes at most. The main constraints are the energy density of batteries and the engine power required for flight. In this work, we present a nano-sized blimp platform consisting of a helium balloon and a rotorcraft. Thanks to the lift provided by the helium, the blimp requires relatively little energy to remain at a stable altitude. We also introduce the concept of duty-cycling the high-power actuators to reduce the energy required for hovering even further. With the addition of a solar panel, it is feasible to sustain tens or even hundreds of flight hours in modest lighting conditions (including indoor use). A functioning 52-gram prototype was thoroughly characterized, and its lifetime was measured under different harvesting conditions. Both our system model and the experimental results indicate that the proposed platform requires less than 200 mW to hover in a self-sustaining fashion. To the best of our knowledge, this is the first nano-sized UAV capable of long-term hovering with such low power requirements.
{"title":"Self-Sustainability in Nano Unmanned Aerial Vehicles: A Blimp Case Study","authors":"D. Palossi, Andres Gomez, Stefan Draskovic, K. Keller, L. Benini, L. Thiele","doi":"10.1145/3075564.3075580","DOIUrl":"https://doi.org/10.1145/3075564.3075580","url":null,"abstract":"Nowadays nano Unmanned Aerial Vehicles (UAV's), such as quad-copters, have very limited flight times, tens of minutes at most. The main constraints are energy density of the batteries and the engine power required for flight. In this work, we present a nano-sized blimp platform, consisting of a helium balloon and a rotorcraft. Thanks to the lift provided by helium, the blimp requires relatively little energy to remain at a stable altitude. We also introduce the concept of duty-cycling high power actuators, to reduce the energy requirements for hovering even further. With the addition of a solar panel, it is even feasible to sustain tens or hundreds of flight hours in modest lighting conditions (including indoor usage). A functioning 52 gram prototype was thoroughly characterized and its lifetime was measured in different harvesting conditions. Both our system model and the experimental results indicate our proposed platform requires less than 200 mW to hover in a self sustainable fashion. This represents, to the best of our knowledge, the first nano-size UAV for long term hovering with low power requirements.","PeriodicalId":398898,"journal":{"name":"Proceedings of the Computing Frontiers Conference","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129547867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud Workload Prediction by Means of Simulations
G. Kecskeméti, A. Kertész, Z. Németh
Clouds hide the complexity of maintaining a physical infrastructure, but with a disadvantage: they also hide their internal workings. Should users need to know these details, e.g., to increase the reliability or performance of their applications, they would need to detect slight behavioural changes in the underlying system. Existing solutions for this purpose offer limited capabilities. This paper proposes a technique for predicting background workload by means of simulations that provide knowledge of the underlying cloud to support activities such as cloud orchestration or workflow enactment. We use these predictions to select more suitable execution environments for scientific workflows, and we validate the proposed prediction approach with a biochemical application.
{"title":"Cloud Workload Prediction by Means of Simulations","authors":"G. Kecskeméti, A. Kertész, Z. Németh","doi":"10.1145/3075564.3075589","DOIUrl":"https://doi.org/10.1145/3075564.3075589","url":null,"abstract":"Clouds hide the complexity of maintaining a physical infrastructure with a disadvantage: they also hide their internal workings. Should users need to know about these details e.g., to increase the reliability or performance of their applications, they would need to detect slight behavioural changes in the underlying system. Existing solutions for such purposes offer limited capabilities. This paper proposes a technique for predicting background workload by means of simulations that are providing knowledge of the underlying clouds to support activities like cloud orchestration or workflow enactment. We propose these predictions to select more suitable execution environments for scientific workflows. We validate the proposed prediction approach with a biochemical application.","PeriodicalId":398898,"journal":{"name":"Proceedings of the Computing Frontiers Conference","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115238310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The "memory wall" problem requires not only the use of increasingly aggressive techniques designed to reduce the latency of memory system, but also the raise of more accurate memory metrics. C-AMAT, an extension of AMAT that considers both locality and concurrency of memory accesses, can evaluate the performance of modern memory system more accurately. However, C-AMAT only involves those cycles consumed by memory accesses, ignoring the blocked time caused by some techniques like hardware prefetch, which may result in inaccurate evaluation. In this paper, we propose a more comprehensive memory metric called Blocked C-AMAT (BC-AMAT). It extends C-AMAT to take the blocked cycles into consideration. Experimental results show that BC-AMAT correlates much better with IPC than C-AMAT does when a few prefetch strategies are applied both in single-core mode and multi-core mode. In addition, a case study is provided in which BC-AMAT is used to adjust prefetching degree dynamically. The result shows that BC-AMAT achieves higher performance improvement than C-AMAT, demonstrating its usefulness in system optimization. BC-AMAT is more accurate and comprehensive than C-AMAT in evaluating modern memory systems, meanwhile, provides more insight for architecture design.
{"title":"BC-AMAT: Considering Blocked Time in Memory System Measurement","authors":"Qi Yu, Libo Huang, Cheng Qian, Zhiying Wang","doi":"10.1145/3075564.3076264","DOIUrl":"https://doi.org/10.1145/3075564.3076264","url":null,"abstract":"The \"memory wall\" problem requires not only the use of increasingly aggressive techniques designed to reduce the latency of memory system, but also the raise of more accurate memory metrics. C-AMAT, an extension of AMAT that considers both locality and concurrency of memory accesses, can evaluate the performance of modern memory system more accurately. However, C-AMAT only involves those cycles consumed by memory accesses, ignoring the blocked time caused by some techniques like hardware prefetch, which may result in inaccurate evaluation. In this paper, we propose a more comprehensive memory metric called Blocked C-AMAT (BC-AMAT). It extends C-AMAT to take the blocked cycles into consideration. Experimental results show that BC-AMAT correlates much better with IPC than C-AMAT does when a few prefetch strategies are applied both in single-core mode and multi-core mode. In addition, a case study is provided in which BC-AMAT is used to adjust prefetching degree dynamically. The result shows that BC-AMAT achieves higher performance improvement than C-AMAT, demonstrating its usefulness in system optimization. BC-AMAT is more accurate and comprehensive than C-AMAT in evaluating modern memory systems, meanwhile, provides more insight for architecture design.","PeriodicalId":398898,"journal":{"name":"Proceedings of the Computing Frontiers Conference","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124723343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extending the comfort zone: DAVIDE
F. Magugliani
A comfort zone is an artificial mental boundary within which you maintain a sense of security. A couple of years ago, PRACE (Partnership for Advanced Computing in Europe) challenged Europe's technology providers to propose new architectures and new concepts, and to build a high-performance computing system mixing old, proven technology with advanced new components. E4 took up the challenge and proposed an innovative and uncomfortable approach: DAVIDE. The talk will present the rationale behind the technological and architectural choices made in building DAVIDE, its key innovative concepts, its software ecosystem, and some preliminary performance results.
{"title":"Extending the comfort zone: DAVIDE","authors":"F. Magugliani","doi":"10.1145/3075564.3095084","DOIUrl":"https://doi.org/10.1145/3075564.3095084","url":null,"abstract":"Comfort zone is an artificial mental boundary within which you maintain a sense of security. A couple of years ago, PRACE (Partnership for Advanced Computing in Europe) challenged the technology providers of Europe in proposing new architectures, new concepts and building a High-Performance Computer system mixing old and proven technology with advanced new components. E4 took the challenge and proposed an innovative and uncomfortable approach: DAVIDE. The talk will present the rationale for the technological and architectural choices done for building DAVIDE, the key innovative concepts, the software ecosystems and some preliminary performance.","PeriodicalId":398898,"journal":{"name":"Proceedings of the Computing Frontiers Conference","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129169729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Work Stealing in a Shared Virtual-Memory Heterogeneous Environment: A Case Study with Betweenness Centrality
Shuai Che, Marc S. Orr, J. Gallmeier
This paper uses betweenness centrality as a case study to investigate efficient work stealing in a heterogeneous system environment. Betweenness centrality is an important algorithm in graph processing; it exhibits multiple levels of parallelism and is an interesting target for various optimizations. We investigate queue-based work stealing to distribute its tasks across GPU compute units (CUs) and across the CPU and the GPU, which has not been done in prior work. In particular, we demonstrate how to leverage the new platform-atomic operations on AMD Accelerated Processing Units (APUs) to operate cross-device queues in a lock-free manner in shared virtual memory. To make the work-stealing runtime and the application more efficient, we apply new architectural features, including atomic operations with different memory scopes and orderings for different synchronization scenarios. We implement our solution using the Heterogeneous System Architecture (HSA). Our results show that betweenness centrality with CPU-GPU work stealing achieves an average of 15% (up to 30%) performance improvement over GPU-only execution for diverse graph inputs. Our work-stealing solution can be applied widely to other applications as well. Finally, we analyze important parameters critical for queuing and stealing.
{"title":"Work Stealing in a Shared Virtual-Memory Heterogeneous Environment: A Case Study with Betweenness Centrality","authors":"Shuai Che, Marc S. Orr, J. Gallmeier","doi":"10.1145/3075564.3075567","DOIUrl":"https://doi.org/10.1145/3075564.3075567","url":null,"abstract":"This paper uses betweenness centrality as a case study to research efficient work stealing in a heterogeneous system environment. Betweenness centrality is an important algorithm in graph processing. It presents multiple-level parallelism and is an interesting problem to exploit various optimizations. We investigate queue-based work stealing to distribute its tasks across GPU compute units (CUs) and across the CPU and the GPU, which has not been done by prior work. In particular, we demonstrate how to leverage the new platform-atomic operations on AMD Accelerated Processing Units (APUs) to operate cross-device queues in a lock-free manner in shared virtual memory. To make the work stealing runtime and the application more efficient, we apply new architectural features, including atomic operations with different memory scopes and or-derings for different synchronization scenarios. We implement our solution using heterogeneous system architecture (HSA). Our results show that betweenness centrality with CPU-GPU work stealing achieves an average of 15% (up to 30%) performance improvement over GPU-only execution for diverse graph inputs. Our work stealing solution can be applied widely to other applications too. Finally, we analyze important parameters critical for queuing and stealing.","PeriodicalId":398898,"journal":{"name":"Proceedings of the Computing Frontiers Conference","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131968975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Big Data Analytics on Large-Scale Scientific Datasets in the INDIGO-DataCloud Project
S. Fiore, Cosimo Palazzo, Alessandro D'Anca, D. Elia, E. Londero, C. Knapic, S. Monna, N. Marcucci, F. Aguilar, M. Plóciennik, J. M. D. Lucas, G. Aloisio
In the context of the EU H2020 INDIGO-DataCloud project, several use cases on large-scale scientific data analysis from different research communities have been implemented. All of them require the availability of large amounts of data, related either to the output of simulations or to observed sensor data, and need scientific (big) data solutions to run data-analysis experiments. More specifically, the paper presents case studies from the following research communities: (i) the European Multidisciplinary Seafloor and water column Observatory (INGV-EMSO), (ii) the Large Binocular Telescope, (iii) LifeWatch, and (iv) the European Network for Earth System Modelling (ENES).
{"title":"Big Data Analytics on Large-Scale Scientific Datasets in the INDIGO-DataCloud Project","authors":"S. Fiore, Cosimo Palazzo, Alessandro D'Anca, D. Elia, E. Londero, C. Knapic, S. Monna, N. Marcucci, F. Aguilar, M. Plóciennik, J. M. D. Lucas, G. Aloisio","doi":"10.1145/3075564.3078884","DOIUrl":"https://doi.org/10.1145/3075564.3078884","url":null,"abstract":"In the context of the EU H2020 INDIGO-DataCloud project several use case on large scale scientific data analysis regarding different research communities have been implemented. All of them require the availability of large amount of data related to either output of simulations or observed data from sensors and need scientific (big) data solutions to run data analysis experiments. More specifically, the paper presents the case studies related to the following research communities: (i) the European Multidisciplinary Seafloor and water column Observatory (INGV-EMSO), (ii) the Large Binocular Telescope, (iii) LifeWatch, and (iv) the European Network for Earth System Modelling (ENES).","PeriodicalId":398898,"journal":{"name":"Proceedings of the Computing Frontiers Conference","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132813437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Selective off-loading to Memory: Task Partitioning and Mapping for PIM-enabled Heterogeneous Systems
Dawen Xu, Yi Liao, Ying Wang, Huawei Li, Xiaowei Li
Processing-in-Memory (PIM) is re-emerging as a promising solution to the memory-wall problem as computing systems step into the big-data era. Researchers have proposed various PIM architectures combined with novel memory devices or 3D integration technology, but there is still a lack of general task-scheduling methods for this new heterogeneous platform. In this paper, we propose a formalized model to quantify the performance and energy of a PIM+CPU heterogeneous parallel system. In addition, we are the first to build a task partitioning and mapping framework that exploits different PIM engines. In this framework, an application is divided into subtasks that are mapped onto appropriate execution units by the proposed PIM-oriented Earliest-Finish-Time (PEFT) algorithm to maximize the performance gains brought by PIM. Experimental evaluations show that our PIM-aware framework significantly improves system performance compared to conventional processor architectures.
{"title":"Selective off-loading to Memory: Task Partitioning and Mapping for PIM-enabled Heterogeneous Systems","authors":"Dawen Xu, Yi Liao, Ying Wang, Huawei Li, Xiaowei Li","doi":"10.1145/3075564.3075584","DOIUrl":"https://doi.org/10.1145/3075564.3075584","url":null,"abstract":"Processing-in-Memory (PIM) is returning as a promising solution to address the issue of memory wall as computing systems gradually step into the big data era. Researchers continually proposed various PIM architecture combined with novel memory device or 3D integration technology, but it is still a lack of universal task scheduling method in terms of the new heterogeneous platform. In this paper, we propose a formalized model to quantify the performance and energy of the PIM+CPU heterogeneous parallel system. In addition, we are the first to build a task partitioning and mapping framework to exploit different PIM engines. In this framework, an application is divided into subtasks and mapped onto appropriate execution units based on the proposed PIM-oriented Earliest-Finish-Time (PEFT) algorithm to maximize the performance gains brought by PIM. Experimental evaluations show our PIM-aware framework significantly improves the system performance compared to conventional processor architectures.","PeriodicalId":398898,"journal":{"name":"Proceedings of the Computing Frontiers Conference","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125331195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ExanaDBT: A Dynamic Compilation System for Transparent Polyhedral Optimizations at Runtime
Yukinori Sato, Tomoya Yuki, Toshio Endo
In this paper, we present a dynamic compilation system called ExanaDBT that transparently optimizes and parallelizes binaries at runtime based on the polyhedral model. Starting from hot-spot detection during execution, ExanaDBT dynamically estimates the gains from optimization, translates the target region into highly optimized code, and switches execution from the original code to the optimized version. To realize advanced loop-level optimizations beyond the trace or instruction level, ExanaDBT uses a polyhedral optimizer and performs loop transformations that yield sustained performance gains on systems with deep memory hierarchies. For optimization to succeed, we show that a simple conversion from the original binary to LLVM IR is not sufficient to represent the code in the polyhedral model, and we investigate a feasible way to lift binaries to an IR amenable to polyhedral optimization. We implement and evaluate a proof-of-concept design of ExanaDBT. The evaluation results confirm that ExanaDBT realizes dynamic optimization in a fully automated fashion, speeding up execution by 3.2x on average over unoptimized serial code in single-thread execution and by 11.9x with 16-thread parallel execution.
{"title":"ExanaDBT: A Dynamic Compilation System for Transparent Polyhedral Optimizations at Runtime","authors":"Yukinori Sato, Tomoya Yuki, Toshio Endo","doi":"10.1145/3075564.3077627","DOIUrl":"https://doi.org/10.1145/3075564.3077627","url":null,"abstract":"In this paper, we present a dynamic compilation system called ExanaDBT for transparently optimizing and parallelizing binaries at runtime based on the polyhedral model. Starting from hot spot detection of the execution, ExanaDBT dynamically estimates gains for optimization, translates the target region into highly optimized code, and switches the execution of original code to optimized one. To realize advanced loop-level optimizations beyond trace- or instruction-level, ExanaDBT uses a polyhedral optimizer and performs loop transformation for rewarding sustainable performance gain on systems with deeper memory hierarchy. Especially for successful optimizations, we reveal that a simple conversion from the original binaries to LLVM IR will not enough for representing the code in polyhedral model, and then investigate a feasible way to lift binaries to the IR capable of polyhedral optimizations. We implement a proof-of-concept design of ExanaDBT and evaluate it. From the evaluation results, we confirm that ExanaDBT realizes dynamic optimization in a fully automated fashion. The results also show that ExanaDBT can contribute to speeding up the execution in average 3.2 times from unoptimized serial code in single thread execution and 11.9 times in 16 thread parallel execution.","PeriodicalId":398898,"journal":{"name":"Proceedings of the Computing Frontiers Conference","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129231547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evolution of Friendship: a case study of MobiClique
Jooyoung Lee, Konstantin Lopatin, Rasheed Hussain, Waqas Nawaz
Understanding the evolution of relationships among users through generic interactions is the key driving force of this study. We model the evolution of friendship in the social network of MobiClique using observations of interactions among users. MobiClique is a mobile ad-hoc network setting in which Bluetooth-enabled mobile devices communicate directly with each other as they meet opportunistically. We first apply existing topological methods to predict future friendships in MobiClique and then compare the results with the proposed interaction-based method. Our approach combines four types of user-activity information to measure the similarity between users at any given time. We also define a temporal accuracy evaluation metric and show that interaction data with temporal information is a good indicator for predicting temporal social ties. The experimental evaluation suggests that the well-known static topological metrics do not perform well in the ad-hoc network scenario, and that accurately predicting the evolution of friendship, or the topology of the network, requires utilising interaction information.
{"title":"Evolution of Friendship: a case study of MobiClique","authors":"Jooyoung Lee, Konstantin Lopatin, Rasheed Hussain, Waqas Nawaz","doi":"10.1145/3075564.3075595","DOIUrl":"https://doi.org/10.1145/3075564.3075595","url":null,"abstract":"Understanding the evolution of relationship among users, through generic interactions, is the key driving force to this study. We model the evolution of friendship in the social network of MobiClique using observations of interactions among users. MobiClique is a mobile ad-hoc network setting where Bluetooth enabled mobile devices communicate directly with each other as they meet opportunistically. We first apply existing topological methods to predict future friendship in MobiClique and then compare the results with the proposed interaction-based method. Our approach combines four types of user activity information to measure the similarity between users at any specific time. We also define the temporal accuracy evaluation metric and show that interaction data with temporal information is a good indicator to predict temporal social ties. The experimental evaluation suggests that the well-known static topological metrics do not perform well in ad-hoc network scenario. The results suggest that to accurately predict evolution of friendship, or topology of the network, it is necessary to utilise some interaction information.","PeriodicalId":398898,"journal":{"name":"Proceedings of the Computing Frontiers Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129462065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}