
Proceedings of the 9th ACM Multimedia Systems Conference: Latest Publications

Comprehensible reasoning and automated reporting of medical examinations based on deep learning analysis
Pub Date : 2018-06-12 DOI: 10.1145/3204949.3208113
S. Hicks, Konstantin Pogorelov, T. Lange, M. Lux, Mattis Jeppsson, K. Randel, S. Eskeland, P. Halvorsen, M. Riegler
In the future, medical doctors will increasingly be assisted by deep learning neural networks for disease detection during patient examinations. To make qualified decisions, the black box of deep learning must be opened to increase the understanding of the reasoning behind the machine learning system's decisions. Furthermore, preparing reports after examinations is a significant part of a doctor's workday, but if we already have a system that dissects the neural network for understanding, the same tool can be used for automatic report generation. In this demo, we describe a system that analyses medical videos from the gastrointestinal tract. Our system dissects the TensorFlow-based neural network to provide insights into the analysis and uses the resulting classification, and the rationale behind it, to automatically generate an examination report for the patient's medical journal.
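The report-generation step can be pictured as a small function that turns the classifier's per-finding confidences into draft text. This is only an illustrative sketch, not the authors' actual pipeline; the labels and template wording are invented:

```python
def generate_report(findings):
    """Turn classifier output into a draft examination report.

    `findings` maps a detected class to its confidence in [0, 1].
    The labels and template wording are invented for illustration.
    """
    lines = ["Automated gastrointestinal examination report:"]
    # List findings from most to least confident.
    for label, conf in sorted(findings.items(), key=lambda kv: -kv[1]):
        lines.append(f"- {label}: detected with confidence {conf:.0%}")
    if not findings:
        lines.append("- no abnormalities detected")
    return "\n".join(lines)
```

In the real system the rationale extracted from the dissected network would also feed into the report; here only the classification output is shown.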
Citations: 10
Skeleton-based continuous extrinsic calibration of multiple RGB-D kinect cameras
Pub Date : 2018-06-12 DOI: 10.1145/3204949.3204969
Kevin Desai, B. Prabhakaran, S. Raghuraman
Applications involving 3D scanning and reconstruction and 3D tele-immersion provide an immersive experience by capturing a scene using multiple RGB-D cameras, such as the Kinect. Prior knowledge of the intrinsic calibration of each camera, and of the extrinsic calibration between cameras, is essential to reconstruct the captured data. The intrinsic calibration of a given camera rarely changes, so it only needs to be estimated once. However, the extrinsic calibration between cameras can change with even a small nudge to a camera. Calibration accuracy depends on sensor noise, the features used, the sampling method, etc., so iterative calibration is needed to achieve good results. In this paper, we introduce a skeleton-based approach to calibrate multiple RGB-D Kinect cameras in a closed setup, automatically and without any intervention, within a few seconds. The method uses only the person present in the scene to calibrate, removing the need to manually insert, detect and extract other objects such as a plane, checkerboard or sphere. The 3D joints of the extracted skeleton are used as correspondence points between cameras after undergoing accuracy and orientation checks. Temporal, spatial and motion constraints are applied during point selection. Our calibration error checking is inexpensive in computational cost and time, and hence runs continuously in the background. Automatic re-calibration of the cameras can be performed when the calibration error exceeds a threshold due to possible camera movement. Evaluations show that the method provides fast, accurate and continuous calibration, as long as a human is moving around in the captured scene.
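The core step of recovering the rotation and translation between two cameras from matched 3D joint positions has a well-known closed-form solution, the Kabsch/Procrustes method. The sketch below (assuming NumPy, and not claiming to be the authors' exact implementation) shows the idea:

```python
import numpy as np

def estimate_extrinsics(src, dst):
    """Estimate rotation R and translation t mapping src -> dst.

    src, dst: (N, 3) arrays of corresponding 3D joint positions
    (e.g. the same skeleton joints seen by two cameras).
    Returns R (3x3) and t (3,) such that dst ≈ src @ R.T + t.
    """
    src_c = src - src.mean(axis=0)          # centre both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

In the paper's continuous setting, such an estimate would be refreshed whenever the background error check detects drift; the accuracy and orientation checks on the joints happen before the points ever reach this step.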
Citations: 12
Depresjon
Pub Date : 2018-06-12 DOI: 10.1145/3204949.3208125
Enrique Garcia-Ceja, M. Riegler, P. Jakobsen, J. Tørresen, T. Nordgreen, K. Oedegaard, Ole Bernt Fasmer
- Depression can largely be understood by looking at the context in which it arises; the person's reaction to their surroundings is important for the development and maintenance of depression.
- Inappropriate attempts at coping, in the form of avoidance, inactivity or rumination, contribute to maintaining depression.
- Thoughts are important, but particular emphasis is placed on the context in which thoughts arise and the function they serve, their precursors and consequences, more than on their content.
Citations: 61
HINDSIGHT
Pub Date : 2018-06-12 DOI: 10.1145/3204949.3208131
Konstantinos Kousias, M. Riegler, Ö. Alay, A. Argyriou
Hyperparameter optimization is an important but often ignored part of successfully training neural networks (NNs), since it is time-consuming and rather complex. In this paper, we present HINDSIGHT, an open-source framework for designing and implementing NNs that supports hyperparameter optimization. HINDSIGHT is built entirely in R, and the current version focuses on Long Short-Term Memory (LSTM) networks, a special kind of Recurrent Neural Network (RNN). HINDSIGHT is designed so that it can easily be expanded to other types of deep learning (DL) algorithms, such as Convolutional Neural Networks (CNNs) or feed-forward Deep Neural Networks (DNNs). The main goal of HINDSIGHT is to provide a simple and quick interface for getting started with LSTM networks and hyperparameter optimization.
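As a rough illustration of what such a framework automates, a minimal random-search loop over an LSTM-style hyperparameter space might look like the following. HINDSIGHT itself is written in R, so this Python sketch only mirrors the concept; `train_fn` is an assumed user-supplied function returning a validation loss:

```python
import random

def random_search(train_fn, space, trials=20, seed=42):
    """Minimal random search over a discrete hyperparameter space.

    train_fn(params) -> validation loss (assumed, user-supplied).
    space: dict mapping hyperparameter name -> list of candidate values.
    Returns the best parameter dict found and its loss.
    """
    rng = random.Random(seed)
    best_params, best_loss = None, float("inf")
    for _ in range(trials):
        # Sample one candidate value per hyperparameter.
        params = {name: rng.choice(values) for name, values in space.items()}
        loss = train_fn(params)
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss
```

A real run would plug in an LSTM training routine as `train_fn`, with a space covering, e.g., layer sizes, learning rates and dropout.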
Citations: 8
Multi-profile ultra high definition (UHD) AVC and HEVC 4K DASH datasets
Pub Date : 2018-06-12 DOI: 10.1145/3204949.3208130
Jason J. Quinlan, C. Sreenan
In this paper we present a Multi-Profile Ultra High Definition (UHD) DASH dataset composed of both AVC (H.264) and HEVC (H.265) video content, generated from three well-known open-source 4K video clips. The representation rates and resolutions of our dataset range from 40 Mbps in 4K down to 235 kbps at 320x240, comparable to the rates used by on-demand services such as Netflix, YouTube and Amazon Prime. We provide our dataset for both real-time testbed evaluation and trace-based simulation. The real-time testbed content provides a means of evaluating DASH adaptation techniques on physical hardware, while our trace-based content supports simulation over frameworks such as ns-2 and ns-3. We also provide the original pre-DASH MP4 files and our associated DASH generation scripts, giving researchers a mechanism to create their own DASH profile content locally. This improves the reproducibility of results and removes re-buffering issues caused by delay, jitter and losses in the Internet. The primary goal of our dataset is to provide the wide range of video content required for validating DASH Quality of Experience (QoE) delivery over networks, ranging from constrained cellular and satellite systems to future high-speed architectures such as the proposed 5G mmWave technology.
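To illustrate how such a multi-rate dataset is consumed, a simple rate-based DASH adaptation heuristic picks the highest representation that fits within the measured throughput. The endpoints of the ladder below echo the dataset's stated range (235 kbps to 40 Mbps), but the intermediate rungs and the safety factor are assumptions for illustration:

```python
def select_representation(bitrates_kbps, throughput_kbps, safety=0.8):
    """Pick the highest representation whose bitrate fits within a
    safety margin of the measured throughput (a common rate-based
    DASH adaptation heuristic)."""
    budget = throughput_kbps * safety
    feasible = [b for b in sorted(bitrates_kbps) if b <= budget]
    # Fall back to the lowest rung when nothing fits the budget.
    return feasible[-1] if feasible else min(bitrates_kbps)

# Hypothetical ladder; only the 235 kbps and 40 Mbps endpoints
# come from the dataset description.
LADDER = [235, 750, 1750, 4300, 8000, 16000, 40000]
```

A client would re-run this selection per segment as its throughput estimate evolves, which is exactly the behaviour the trace-based part of the dataset lets researchers simulate.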
Citations: 33
Sensorclone
Pub Date : 2018-06-12 DOI: 10.1145/3204949.3204952
Huber Flores, Pan Hui, S. Tarkoma, Yong Li, T. Anagnostopoulos, V. Kostakos, Chu Luo, Xiang Su
IoT services hosted by low-power devices rely on the cloud infrastructure to propagate their ubiquitous presence over the Internet. A critical challenge for IoT systems is to ensure continuous provisioning of IoT services despite network breakdowns, hardware failures and energy constraints. To overcome these issues, we propose a cloud-based framework, SensorClone, which relies on virtual devices to improve IoT resilience. A virtual device is the digital counterpart of a physical device that has learned to emulate the physical device's operations from sample data collected from it. SensorClone exploits the collected data of low-power devices to create virtual devices in the cloud. SensorClone can then opportunistically migrate virtual devices from the cloud into other, potentially underutilized devices with higher capabilities closer to the edge of the network, e.g., smart devices. Through a real deployment of SensorClone in the wild, we identify that virtual devices can be used for two purposes: 1) to reduce the energy consumption of physical devices by duty-cycling their service provisioning between the physical device and the virtual representation hosted in the cloud, and 2) to scale IoT services at the edge of the network by harnessing temporal periods of underutilization of smart devices. To evaluate our framework, we present a use case of a virtual sensor created from an IoT temperature service. Our results verify that it is possible to achieve availability of up to 90% and substantial power efficiency under acceptable levels of quality of service. Our work contributes towards improving IoT scalability and resilience through virtual devices.
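As a toy illustration of the virtual-device idea, one could fit a simple model to a physical sensor's past readings and answer queries from that model while the device sleeps. The least-squares line below is a stand-in for whatever model SensorClone actually learns from its sample data:

```python
def fit_virtual_sensor(samples):
    """Fit value ≈ a*t + b to timestamped readings of a physical
    sensor, so a cloud-hosted 'virtual device' can keep answering
    queries while the physical one is duty-cycled off.

    samples: list of (timestamp, value) pairs. Plain least squares;
    purely illustrative of the emulation concept.
    """
    n = len(samples)
    sum_t = sum(t for t, _ in samples)
    sum_v = sum(v for _, v in samples)
    sum_tt = sum(t * t for t, _ in samples)
    sum_tv = sum(t * v for t, v in samples)
    denom = n * sum_tt - sum_t ** 2
    a = (n * sum_tv - sum_t * sum_v) / denom
    b = (sum_v - a * sum_t) / n
    return lambda t: a * t + b      # the "virtual sensor"
```

In the paper's temperature use case, the physical sensor would periodically resync the virtual one with fresh readings so the emulation does not drift.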
Citations: 2
Automated profiling of virtualized media processing functions using telemetry and machine learning
Pub Date : 2018-06-12 DOI: 10.1145/3204949.3204976
R. Mekuria, M. Mcgrath, Vincenzo Riccobene, Victor Bayon-Molino, C. Tselios, John Thomson, Artem Dobrodub
Most media streaming services are composed of different virtualized processing functions, such as encoding, packaging, encryption and content stitching. Deploying these functions in the cloud is attractive, as it enables flexibility in deployment options and resource allocation for the different functions. Yet most of the time, overprovisioning of cloud resources is necessary to meet demand variability. This can be costly, especially for large-scale deployments. Prior art proposes resource allocation based on analytical models that minimize the costs of cloud deployments under a quality of service (QoS) constraint. However, these models do not sufficiently capture the underlying complexity of services composed of multiple processing functions. Instead, we introduce a novel methodology based on full-stack telemetry and machine learning to profile virtualized or cloud-native media processing functions individually. The approach investigates four categories of performance metrics: throughput, anomaly, latency and entropy (TALE), in offline (stress test) and online setups using cloud telemetry. Machine learning is then used to profile the media processing function in the targeted cloud/NFV environment and to extract the most relevant cloud-level Key Performance Indicators (KPIs) that relate to the final perceived quality and to known client-side performance indicators. The results enable more efficient monitoring, as only KPI-related metrics need to be collected, stored and analyzed, reducing the storage and communication footprints by over 85%. In addition, a detailed overview of the functions' behavior was obtained, enabling optimized initial configuration and deployment, and more fine-grained dynamic online resource allocation that reduces overprovisioning and avoids function collapse. We further highlight the next steps towards cloud-native, carrier-grade virtualized processing functions relevant to future network architectures, such as emerging 5G architectures.
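Of the four TALE metric categories, entropy is perhaps the least obvious. One plausible way to compute it for a telemetry stream is to bin the samples and take the Shannon entropy of the bin histogram; this is an illustrative estimator, not a formula prescribed by the paper:

```python
import math
from collections import Counter

def stream_entropy(samples, bins=10):
    """Shannon entropy (in bits) of a telemetry metric stream,
    after binning samples into equal-width buckets. Low entropy
    suggests a stable metric; high entropy suggests erratic
    behaviour worth profiling further."""
    lo, hi = min(samples), max(samples)
    if hi == lo:
        return 0.0                      # constant stream: zero entropy
    width = (hi - lo) / bins
    counts = Counter(min(int((s - lo) / width), bins - 1) for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Such a per-metric summary is cheap to compute continuously, which fits the paper's goal of keeping only KPI-related telemetry.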
Citations: 2
A QoE assessment method based on EDA, heart rate and EEG of a virtual reality assistive technology system
Pub Date : 2018-06-12 DOI: 10.1145/3204949.3208118
Débora Pereira Salgado, F. Martins, T. B. Rodrigues, Conor Keighrey, R. Flynn, E. Naves, Niall Murray
The key aim of assistive technology (AT) systems is to augment an individual's functioning while supporting an enhanced quality of life (QoL). In recent times, we have seen the emergence of Virtual Reality (VR) based assistive technology systems, made possible by the availability of commercial Head-Mounted Displays (HMDs). The use of VR for AT aims to support levels of interaction and immersion not previously possible with more traditional AT solutions. Crucial to the success of these technologies is understanding, from the user's perspective, the factors that influence the user's Quality of Experience (QoE). In addition to the typical QoE metrics, other factors to consider are aspects of human behavior such as mental and emotional state, posture and gestures. In terms of objectively quantifying such factors, a wide range of wearable sensors can monitor physiological signals and provide reliable data. In this demo, we capture and present the user's EEG, heart rate, EDA and head motion during the use of an AT VR application. The prototype is composed of a sensor system, wearable sensors that acquire the biological signals, and a presentation system, a virtual wheelchair simulator that interfaces to a typical LCD display.
Citations: 27
Valid.IoT: a framework for sensor data quality analysis and interpolation
Pub Date : 2018-06-12 DOI: 10.1145/3204949.3204972
Daniel Kümper, Thorben Iggena, R. Tönjes, E. Pulvermüller
Heterogeneous sensor device networks with diverse maintainers, together with information collected via social media and crowdsourcing, tend to be elements of uncertainty in IoT and Smart City networks. Often, no ground truth is available that can be used to check the plausibility and concordance of new information. This paper proposes the Valid.IoT framework as an attachable IoT framework component that can be linked to a variety of platforms to generate QoI vectors and interpolated sensory data with plausibility and quality estimations. The framework utilises extended infrastructure knowledge and infrastructure-aware interpolation algorithms to validate crowdsourced and device-generated sensor information through sensor fusion.
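One concrete instance of interpolating a sensory value from surrounding readings is inverse-distance weighting. The sketch below is illustrative only; the framework's actual infrastructure-aware algorithms are more sophisticated than plain distance weighting:

```python
def idw_interpolate(stations, query, power=2.0):
    """Inverse-distance-weighted estimate of a sensor value at
    `query` from nearby readings.

    stations: list of ((x, y), value) pairs; query: (x, y).
    Closer stations contribute more; `power` controls the falloff.
    """
    num = den = 0.0
    for (x, y), value in stations:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return value            # query coincides with a station
        w = 1.0 / d2 ** (power / 2)
        num += w * value
        den += w
    return num / den
```

An interpolated value like this could then be compared against a device's actual report as one plausibility check in the fusion step.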
Citations: 27
Film editing: new levers to improve VR streaming
Pub Date : 2018-06-12 DOI: 10.1145/3204949.3204962
Savino Dambra, Giuseppe Samela, L. Sassatelli, R. Pighetti, R. Aparicio-Pardo, A. Pinna-Dery
Streaming Virtual Reality (VR), even in the mere form of 360° videos, is much more complex than streaming regular videos because, to lower the required rates, the transmission decisions must take the user's head position into account. The way the user exploits her/his freedom is therefore crucial for the network load. In turn, the way the user moves depends on the video content itself. VR is, however, a whole new medium, for which the film-making language does not exist yet; its "grammar" is only now being invented. We present a strongly inter-disciplinary approach to improve the streaming of 360° videos: designing high-level content manipulations (film editing) to limit and even control the user's motion in order to consume less bandwidth while maintaining the user's experience. We build an MPEG DASH-SRD player for Android and the Samsung Gear VR, featuring FoV-based quality decisions and a replacement strategy that allows the tiles' buffers to build up while keeping their state as up-to-date with the current FoV as bandwidth allows. The editing strategies we design have been integrated within the player, and the streaming module has been extended to benefit from the editing. Two sets of user experiments showed that editing indeed impacts head velocity (reductions of up to 30%), consumed bandwidth (reductions of up to 25%) and subjective assessment. User-attention-driving tools from other communities can hence be designed to improve streaming. We believe this innovative work opens up the path to a whole new field of possibilities in defining degrees of freedom to be wielded for VR streaming optimization.
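A FoV-based quality decision of the kind the abstract describes typically spends a bandwidth budget on the tiles nearest the viewport centre first. The greedy sketch below illustrates the idea under assumed inputs (tile yaw centres, a flat per-level bitrate table); it is not the player's actual decision logic:

```python
def assign_tile_qualities(tiles, fov_center, budget, rates):
    """Greedy FoV-based quality decision for tiled 360-degree streaming.

    `tiles` maps a tile id to its angular centre in degrees of yaw;
    `rates` lists the bitrate of each quality level, lowest first.
    Tiles closest to the viewport centre are upgraded first until the
    bandwidth budget is exhausted. Illustrative sketch only.
    """
    def angular_distance(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)  # wrap-around distance on the circle

    quality = {tid: 0 for tid in tiles}   # every tile starts lowest
    spent = len(tiles) * rates[0]         # cost of the baseline layer
    for tid in sorted(tiles, key=lambda t: angular_distance(tiles[t], fov_center)):
        # raise this tile's quality level while the budget allows it
        while quality[tid] + 1 < len(rates):
            extra = rates[quality[tid] + 1] - rates[quality[tid]]
            if spent + extra > budget:
                break
            quality[tid] += 1
            spent += extra
    return quality
```

With three tiles at yaw 0°, 90° and 180°, a viewport at 0° and a budget covering two upgrades, the front and side tiles get the high rate while the rear tile stays at the base layer.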
Citations: 25
Proceedings of the 9th ACM Multimedia Systems Conference