360-Degree Video Head Movement Dataset
Xavier Corbillon, F. D. Simone, G. Simon
doi: 10.1145/3083187.3083215
While Virtual Reality applications are increasingly attracting the attention of developers and business analysts, the behaviour of users watching 360-degree (i.e., omnidirectional) videos has not yet been thoroughly studied. This paper introduces a dataset of head movements of users watching 360-degree videos on a Head-Mounted Display (HMD). The dataset includes data collected from 59 users watching five 70-second 360-degree videos on the Razer OSVR HDK2 HMD. The selected videos span a wide range of 360-degree content for which different levels of viewer involvement, and thus different navigation patterns, could be expected. We describe the open-source software developed to produce the dataset and present the test material and viewing conditions considered during the data acquisition. Finally, we show examples of statistics that can be extracted from the collected data for a content-dependent analysis of users' navigation patterns. The source code of the software used to collect the data has been made publicly available, together with the entire dataset, to enable the community to extend it.
SAP: Stall-Aware Pacing for Improved DASH Video Experience in Cellular Networks
A. Zahran, Jason J. Quinlan, K. Ramakrishnan, C. Sreenan
doi: 10.1145/3083187.3083199
The dramatic growth of cellular video traffic represents a practical challenge for cellular network operators in providing a consistent streaming Quality of Experience (QoE) to their users. Satisfying this objective has so far proved elusive, due to the inherent system complexities that degrade streaming performance, such as variability in both video bitrate and network conditions. In this paper, we present SAP, a DASH video traffic management solution that reduces playback stalls and seeks to maintain a consistent QoE for cellular users, even those with diverse channel conditions. SAP achieves this by leveraging both network and client state information to optimize the pacing of individual video flows. We extensively evaluate SAP performance using real video content and clients operating over a simulated LTE network. We implement state-of-the-art client adaptation and traffic management strategies for direct comparison. Our results, using a heavily loaded base station, show that SAP reduces the number of stalls and the average stall duration per session by up to 95%. Additionally, SAP ensures that clients with good channel conditions do not dominate available wireless resources, evidenced by a reduction of up to 40% in the standard deviation of the QoE metric.
{"title":"SAP: Stall-Aware Pacing for Improved DASH Video Experience in Cellular Networks","authors":"A. Zahran, Jason J. Quinlan, K. Ramakrishnan, C. Sreenan","doi":"10.1145/3083187.3083199","DOIUrl":"https://doi.org/10.1145/3083187.3083199","url":null,"abstract":"The dramatic growth of cellular video traffic represents a practical challenge for cellular network operators in providing a consistent streaming Quality of Experience (QoE) to their users. Satisfying this objective has so-far proved elusive, due to the inherent system complexities that degrade streaming performance, such as variability in both video bitrate and network conditions. In this paper, we present SAP as a DASH video traffic management solution that reduces playback stalls and seeks to maintain a consistent QoE for cellular users, even those with diverse channel conditions. SAP achieves this by leveraging both network and client state information to optimize the pacing of individual video flows. We extensively evaluate SAP performance using real video content and clients, operating over a simulated LTE network. We implement state-of-the-art client adaptation and traffic management strategies for direct comparison. Our results, using a heavily loaded base station, show that SAP reduces the number of stalls and the average stall duration per session by up to 95%. Additionally, SAP ensures that clients with good channel conditions do not dominate available wireless resources, evidenced by a reduction of up to 40% in the standard deviation of the QoE metric.","PeriodicalId":123321,"journal":{"name":"Proceedings of the 8th ACM on Multimedia Systems Conference","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130553542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nerthus: A Bowel Preparation Quality Video Dataset
Konstantin Pogorelov, K. Randel, T. Lange, S. Eskeland, C. Griwodz, Dag Johansen, C. Spampinato, M. Taschwer, M. Lux, P. Schmidt, M. Riegler, P. Halvorsen
doi: 10.1145/3083187.3083216
Bowel preparation (cleansing) is considered a key precondition for successful colonoscopy (endoscopic examination of the bowel). The degree of bowel cleansing directly affects the ability to detect diseases and may influence decisions on screening and follow-up examination intervals. An accurate assessment of bowel preparation quality is therefore important. Despite the use of reliable and validated bowel preparation scales, the grading may vary from one doctor to another. An objective and automated assessment of bowel cleansing would help reduce such inequalities and optimize the use of medical resources. This would also be a valuable feature for automatic endoscopy reporting in the future. In this paper, we present Nerthus, a dataset of videos from inside the gastrointestinal (GI) tract showing different degrees of bowel cleansing. By providing this dataset, we invite multimedia researchers to contribute to the medical field by building systems that automatically evaluate the quality of bowel cleansing for colonoscopy. Such innovations would likely help improve the field of GI endoscopy.
{"title":"Nerthus: A Bowel Preparation Quality Video Dataset","authors":"Konstantin Pogorelov, K. Randel, T. Lange, S. Eskeland, C. Griwodz, Dag Johansen, C. Spampinato, M. Taschwer, M. Lux, P. Schmidt, M. Riegler, P. Halvorsen","doi":"10.1145/3083187.3083216","DOIUrl":"https://doi.org/10.1145/3083187.3083216","url":null,"abstract":"Bowel preparation (cleansing) is considered to be a key precondition for successful colonoscopy (endoscopic examination of the bowel). The degree of bowel cleansing directly affects the possibility to detect diseases and may influence decisions on screening and follow-up examination intervals. An accurate assessment of bowel preparation quality is therefore important. Despite the use of reliable and validated bowel preparation scales, the grading may vary from one doctor to another. An objective and automated assessment of bowel cleansing would contribute to reduce such inequalities and optimize use of medical resources. This would also be a valuable feature for automatic endoscopy reporting in the future. In this paper, we present Nerthus, a dataset containing videos from inside the gastrointestinal (GI) tract, showing different degrees of bowel cleansing. By providing this dataset, we invite multimedia researchers to contribute in the medical field by making systems automatically evaluate the quality of bowel cleansing for colonoscopy. Such innovations would probably contribute to improve the medical field of GI endoscopy.","PeriodicalId":123321,"journal":{"name":"Proceedings of the 8th ACM on Multimedia Systems Conference","volume":"1606 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129210469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AdViSE: Adaptive Video Streaming Evaluation Framework for the Automated Testing of Media Players
Anatoliy Zabrovskiy, Evgeny Kuzmin, E. Petrov, C. Timmerer, Christopher Müller
doi: 10.1145/3083187.3083221
Today we can observe a plethora of adaptive video streaming services and media players that support interoperable formats like DASH and HLS. Most of these players and their rate adaptation algorithms work as black boxes. We have developed a system for easy and rapid testing of media players under various network scenarios. In this paper, we introduce AdViSE, the Adaptive Video Streaming Evaluation framework for the automated testing of adaptive media players. The framework is used for the comparison and testing of media players in the context of adaptive video streaming over HTTP in web/HTML5 environments. The demonstration showcases a series of experiments with different media players under given context conditions (e.g., network shaping, delivery format). We also demonstrate the real-time capabilities of the framework and offline analysis including several QoE metrics with respect to a newly introduced bandwidth index.
{"title":"AdViSE: Adaptive Video Streaming Evaluation Framework for the Automated Testing of Media Players","authors":"Anatoliy Zabrovskiy, Evgeny Kuzmin, E. Petrov, C. Timmerer, Christopher Müller","doi":"10.1145/3083187.3083221","DOIUrl":"https://doi.org/10.1145/3083187.3083221","url":null,"abstract":"Today we can observe a plethora of adaptive video streaming services and media players which support interoperable formats like DASH and HLS. Most of the players and their rate adaptation algorithms work as a black box. We have developed a system for easy and rapid testing of media players under various network scenarios. In this paper, we introduce AdViSE, the Adaptive Video Streaming Evaluation framework for the automated testing of adaptive media players. The presented framework is used for the comparison and testing of media players in the context of adaptive video streaming over HTTP in web/HTML5 environments.; AB@The demonstration showcases a series of experiments with different media players under given context conditions (e.g., network shaping, delivery format). We will also demonstrate the real-time capabilities of the framework and offline analysis including several QoE metrics with respect to a newly introduced bandwidth index.","PeriodicalId":123321,"journal":{"name":"Proceedings of the 8th ACM on Multimedia Systems Conference","volume":"126 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131337306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving Virtual Reality Streaming using HTTP/2
Stefano Petrangeli, F. Turck, Viswanathan Swaminathan, Mohammad Hosseini
doi: 10.1145/3083187.3083224
The demand for 360° Virtual Reality (VR) videos is expected to grow in the near future, thanks to the diffusion of VR headsets. VR streaming is, however, challenged by the high bandwidth requirements of 360° videos. To save bandwidth, we spatially tile the video using the H.265 standard and stream only the tiles in view at the highest quality. The video is also temporally segmented, so that each temporal segment is composed of several spatial tiles. To minimize quality transitions when the user moves, an algorithm is developed to predict where the user is likely to watch in the near future. Consequently, predicted tiles are also streamed at the highest quality. Finally, HTTP/2 server push is used to deliver the tiled video: only one request is sent from the client, and all the tiles of a segment are automatically pushed from the server. This approach results in better bandwidth utilization and video quality compared to traditional streaming over HTTP/1.1, where each tile has to be requested independently by the client. We showcase the benefits of our framework using a prototype developed on a Samsung Galaxy S7 and a Gear VR, which supports both tiled and non-tiled videos and streaming over both HTTP/1.1 and HTTP/2. Under limited bandwidth conditions, we demonstrate how our framework can improve the quality watched by the user compared to a non-tiled solution where all of the video is streamed at the same quality. This result represents a major improvement for the efficient streaming of VR videos.
{"title":"Improving Virtual Reality Streaming using HTTP/2","authors":"Stefano Petrangeli, F. Turck, Viswanathan Swaminathan, Mohammad Hosseini","doi":"10.1145/3083187.3083224","DOIUrl":"https://doi.org/10.1145/3083187.3083224","url":null,"abstract":"The demand for 360° Virtual Reality (VR) videos is expected to grow in the near future, thanks to the diffusion of VR headsets. VR Streaming is however challenged by the high bandwidth requirements of 360° videos. To save bandwidth, we spatially tile the video using the H.265 standard and stream only tiles in view at the highest quality. The video is also temporally segmented, so that each temporal segment is composed of several spatial tiles. In order to minimize quality transitions when the user moves, an algorithm is developed to predict where the user is likely going to watch in the near future. Consequently, predicted tiles are also streamed at the highest quality. Finally, the server push in HTTP/2 is used to deliver the tiled video. Only one request is sent from the client; all the tiles of a segment are automatically pushed from the server. This approach results in a better bandwidth utilization and video quality compared to traditional streaming over HTTP/1.1, where each tile has to be requested independently by the client. We showcase the benefits of our framework using a prototype developed on a Samsung Galaxy S7 and a Gear VR, which supports both tiled and non-tiled videos and streaming over HTTP/1.1 and HTTP/2. Under limited bandwidth conditions, we demonstrate how our framework can improve the quality watched by the user compared to a non-tiled solution where all of the video is streamed at the same quality. This result represents a major improvement for the efficient streaming of VR videos.","PeriodicalId":123321,"journal":{"name":"Proceedings of the 8th ACM on Multimedia Systems Conference","volume":"191 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122492660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Follow Me: Personalized IPTV Channel Switching Guide
Chenguang Yu, Hao Ding, Houwei Cao, Yong Liu, Can Yang
doi: 10.1145/3083187.3083194
Compared with traditional television services, Internet Protocol TV (IPTV) can provide far more TV channels to end users. However, finding channels of interest among such a large number can be confusing and even painful for users. In this paper, using a large IPTV trace, we analyze user channel-switching behaviors to understand when, why, and how users switch channels. Based on this analysis, we develop several base and fusion recommender systems that generate, in real time, a short list of channels for users to consider whenever they want to switch channels. Evaluation on the IPTV trace demonstrates that our recommender systems can achieve a hit ratio of up to 45 percent with only three candidate channels. Our recommender systems only need access to user channel-watching sequences and can be easily adopted by IPTV systems with low data and computation overheads.
{"title":"Follow Me: Personalized IPTV Channel Switching Guide","authors":"Chenguang Yu, Hao Ding, Houwei Cao, Yong Liu, Can Yang","doi":"10.1145/3083187.3083194","DOIUrl":"https://doi.org/10.1145/3083187.3083194","url":null,"abstract":"Compared with the traditional television services, Internet Protocol TV (IPTV) can provide far more TV channels to end users. However, it may also make users feel confused even painful to find channels of their interests from a large number of them. In this paper, using a large IPTV trace, we analyze user channel-switching behaviors to understand when, why and how they switch channels. Based on user behavior analysis, we develop several base and fusion recommender systems that generate in real-time a short list of channels for users to consider whenever they want to switch channels. Evaluation on the IPTV trace demonstrates that our recommender systems can achieve up to 45 percent hit ratio with only three candidate channels. Our recommender systems only need access to user channel watching sequences, and can be easily adopted by IPTV systems with low data and computation overheads.","PeriodicalId":123321,"journal":{"name":"Proceedings of the 8th ACM on Multimedia Systems Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129207136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Deep Convolutional Neural Network Based Virtual Elderly Companion Agent
Ming-Che Lee, Sheng-Cheng Yeh, Sheng Yu Chiu, Jia-Wei Chang
doi: 10.1145/3083187.3083220
This study presents a Virtual Elderly Companion Agent based on speech spectrograms and deep convolutional neural networks. The system dynamically detects and analyzes the user's emotion from the dialogue and gives appropriate positive feedback. The proposed architecture is divided into two parts: the client side supports the Android operating system; the server side is implemented in Python and applies GoogLeNet and AlexNet for emotion recognition. The system accepts natural-language speech input and analyzes the converted speech spectrogram to provide appropriate feedback.
{"title":"A Deep Convolutional Neural Network Based Virtual Elderly Companion Agent","authors":"Ming-Che Lee, Sheng-Cheng Yeh, Sheng Yu Chiu, Jia-Wei Chang","doi":"10.1145/3083187.3083220","DOIUrl":"https://doi.org/10.1145/3083187.3083220","url":null,"abstract":"This study presents a Virtual Elderly Companion Agent that based on speech spectrograms and deep convolutional neural networks. The system can dynamically detect and analyze the user's emotion from the dialogue and give appropriate positive feedback. The proposed system architecture is divided into two parts. The client side supports Android operating system; the server side is implemented in python, and applied GoogleLeNet and AlexNet for emotion recognition. The system supports natural language speech input, and then analyzes the converted speech spectrogram to provide appropriate feedback.","PeriodicalId":123321,"journal":{"name":"Proceedings of the 8th ACM on Multimedia Systems Conference","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125668162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CWI-ADE2016 Dataset: Sensing nightclubs through 40 million BLE packets
Sergio Cabrero Barros, Jack Jansen, Thomas Röggla, J. Gómez, David A. Shamma, Pablo César
doi: 10.1145/3083187.3083213
The CWI-ADE2016 Dataset is a collection of more than 40 million Bluetooth Low Energy (BLE) packets and 14 million accelerometer and temperature samples generated by wristbands that people wore in a nightclub. The data was gathered during Amsterdam Dance Event 2016 in an exclusive club experience curated around the human senses, which leveraged technology as a bridge between the club and its guests. Each guest was handed a custom-made wristband with a BLE-enabled device that broadcast movement, temperature, and other sensor readings. A network of Raspberry Pi receivers deployed for the occasion captured broadcast packets from the wristbands and any other BLE device in the environment. The data provides a full picture of the performance of a real-life deployment of a sensing infrastructure and gives insights into designing sensing platforms, understanding network and crowd behaviour, and studying opportunistic sensing. This paper describes an analysis of the dataset and some examples of its usage.
{"title":"CWI-ADE2016 Dataset: Sensing nightclubs through 40 million BLE packets","authors":"Sergio Cabrero Barros, Jack Jansen, Thomas Röggla, J. Gómez, David A. Shamma, Pablo César","doi":"10.1145/3083187.3083213","DOIUrl":"https://doi.org/10.1145/3083187.3083213","url":null,"abstract":"The CWI-ADE2016 Dataset is a collection of more than 40 million Bluetooth Low Energy (BLE) packets and of 14 million accelerometer and temperature samples generated by wristbands that people wore in a nightclub. The data was gathered during Amsterdam Dance Event 2016 in an exclusive club experience curated around human senses, which leveraged technology as a bridge between the club and the guests. Each guest was handed a custom-made wristband with a BLE-enabled device that broadcast movement, temperature and other sensor readings. A network of Raspberry Pi receivers deployed for the occasion captured broadcast packets from wristbands and any other BLE device in the environment. This data provides a full picture of the performance of the real life deployment of a sensing infrastructure and gives insights to designing sensing platforms, understanding networks and crowds behaviour or studying opportunistic sensing. This paper describes an analysis of this dataset and some examples of usage.","PeriodicalId":123321,"journal":{"name":"Proceedings of the 8th ACM on Multimedia Systems Conference","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122735020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BUFFEST: Predicting Buffer Conditions and Real-time Requirements of HTTP(S) Adaptive Streaming Clients
Vengatanathan Krishnamoorthi, Niklas Carlsson, Emir Halepovic, E. Petajan
doi: 10.1145/3083187.3083193
Stalls during video playback are perhaps the most important indicator of a client's viewing experience. To provide the best possible service, a proactive network operator may therefore want to know the buffer conditions of streaming clients and use this information to help avoid stalls due to empty buffers. However, estimating clients' buffer conditions is complicated by the fact that most streaming services are rate-adaptive, and many are also encrypted. Rate adaptation reduces the correlation between network throughput and client buffer conditions, and the use of HTTPS prevents operators from observing information related to video chunk requests, such as indications of rate adaptation or other HTTP-level information. This paper presents BUFFEST, a novel classification framework that can be used to classify and predict streaming clients' buffer conditions from both HTTP and HTTPS traffic. To illustrate the tradeoffs between prediction accuracy and the information available to classifiers, we design and evaluate classifiers of different complexity. At the core of BUFFEST is an event-based buffer emulator module for detailed analysis of clients' buffer levels throughout a streaming session, as well as for automated training and evaluation of online packet-level classifiers. We then present example results using simple threshold-based classifiers and machine learning classifiers that use only TCP/IP packet-level information. Our results are encouraging and show that BUFFEST can distinguish streaming clients with low buffer levels from clients with a significant buffer margin even when HTTPS is used.
{"title":"BUFFEST: Predicting Buffer Conditions and Real-time Requirements of HTTP(S) Adaptive Streaming Clients","authors":"Vengatanathan Krishnamoorthi, Niklas Carlsson, Emir Halepovic, E. Petajan","doi":"10.1145/3083187.3083193","DOIUrl":"https://doi.org/10.1145/3083187.3083193","url":null,"abstract":"Stalls during video playback are perhaps the most important indicator of a client's viewing experience. To provide the best possible service, a proactive network operator may therefore want to know the buffer conditions of streaming clients and use this information to help avoid stalls due to empty buffers. However, estimation of clients' buffer conditions is complicated by most streaming services being rate-adaptive, and many of them also encrypted. Rate adaptation reduces the correlation between network throughput and client buffer conditions. Usage of HTTPS prevents operators from observing information related to video chunk requests, such as indications of rate adaptation or other HTTP-level information. This paper presents BUFFEST, a novel classification framework that can be used to classify and predict streaming clients' buffer conditions from both HTTP and HTTPS traffic. To illustrate the tradeoffs between prediction accuracy and the available information used by classifiers, we design and evaluate classifiers of different complexity. At the core of BUFFEST is an event-based buffer emulator module for detailed analysis of clients' buffer levels throughout a streaming session, as well as for automated training and evaluation of online packet-level classifiers. We then present example results using simple threshold-based classifiers and machine learning classifiers that only use TCP/IP packet-level information. Our results are encouraging and show that BUFFEST can distinguish streaming clients with low buffer conditions from clients with significant buffer margin during a session even when HTTPS is used.","PeriodicalId":123321,"journal":{"name":"Proceedings of the 8th ACM on Multimedia Systems Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129006398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Time Evaluation for the Integration of a Gestural Interactive Application with a Distributed Mulsemedia Platform
Estêvão Bissoli Saleme, J. R. Celestrini, Celso A. S. Santos
doi: 10.1145/3083187.3084013
Mulsemedia applications have become increasingly popular, and there have been many efforts to increase the Quality of Experience (QoE) they deliver. From the users' perspective, it is crucial that such systems produce high levels of enjoyment and utility. Thus, many experimental tools have been developed and applied to different purposes such as entertainment, health, and culture. Despite that, little attention has been paid to the evaluation of mulsemedia tools and platforms. In this paper, we present a time evaluation of the integration between a distributed mulsemedia platform called PlaySEM and an interactive application in which users interact through gestures, in order to determine how long this process takes. We describe the test scenario and our approach for measuring this integration. We then discuss the results and point out implications to be taken into account in future similar solutions. The results show that, on average, 27 ms to 67 ms elapse throughout the process before the effective activation of the sensory-effect devices on a wired network.
{"title":"Time Evaluation for the Integration of a Gestural Interactive Application with a Distributed Mulsemedia Platform","authors":"Estêvão Bissoli Saleme, J. R. Celestrini, Celso A. S. Santos","doi":"10.1145/3083187.3084013","DOIUrl":"https://doi.org/10.1145/3083187.3084013","url":null,"abstract":"Mulsemedia applications have become increasingly popular. There have been many efforts to increase the Quality of Experience (QoE) of users by using them. From the users' perspective, it is crucial that systems produce high levels of enjoyment and utility. Thus, many experimental tools have been developed and applied to different purposes such as entertainment, health, and culture. Despite that, little attention is paid to the evaluation of mulsemedia tools and platforms. In this paper, we present a time evaluation of the integration between a distributed mulsemedia platform called PlaySEM and an interactive application whereby users interact by gestures, in order to discover how long this process takes. We describe the test scenario and our approach for measuring this integration. Then, we discuss the results and point out aspects that bring implications to be taken into account for future similar solutions. The results showed values in the range of 27ms to 67ms on average spent throughout the process before the effective activation of the sensory effect devices on a wired network.","PeriodicalId":123321,"journal":{"name":"Proceedings of the 8th ACM on Multimedia Systems Conference","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126577957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}