Hiding in Plain Sight: A Measurement and Analysis of Kids’ Exposure to Malicious URLs on YouTube
Sultan Alshamrani, Ahmed A. Abusnaina, David A. Mohaisen
2020 IEEE/ACM Symposium on Edge Computing (SEC)
Pub Date: 2020-09-16 | DOI: 10.1109/SEC50012.2020.00046
The Internet has become an essential part of children’s and adolescents’ daily lives. Young users rely on social media platforms for education and entertainment on a daily basis, prompting enormous efforts to ensure their safety when interacting with these platforms. In this paper, we investigate the exposure of such users to inappropriate and malicious content in comments posted on YouTube videos targeting this demographic. We collected a large-scale dataset of approximately four million records and studied the presence of malicious and inappropriate URLs embedded in the comments posted on these videos. Our results show a worrisome number of malicious and inappropriate URLs in comments visible to children and young users, with a high chance of exposure: the average number of views on videos containing such URLs is 48 million. When using such platforms, children are exposed not only to the material hosted on the platform itself, but also to the content of the URLs embedded within the comments. This highlights the importance of monitoring the URLs provided within comments, limiting children’s exposure to inappropriate content.
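The measurement described above, extracting URLs embedded in comments and flagging the malicious ones, can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the regular expression, the sample comments, and the toy blocklist are all assumptions (the paper would rely on external URL-reputation services rather than a hardcoded set).

```python
import re
from urllib.parse import urlparse

# Simple URL extractor; real comment text may need more robust parsing.
URL_RE = re.compile(r"https?://[^\s\"'<>]+")

# Illustrative stand-in for a URL-reputation lookup (hypothetical hosts).
BLOCKLIST = {"malicious.example.com", "phish.example.net"}

def extract_urls(comment: str) -> list:
    """Return all http(s) URLs found in a comment string."""
    return URL_RE.findall(comment)

def hostname(url: str) -> str:
    """Extract the hostname of a URL (empty string if unparsable)."""
    return urlparse(url).hostname or ""

def flag_comments(comments):
    """Return (url, comment) pairs whose URL host is on the blocklist."""
    flagged = []
    for c in comments:
        for u in extract_urls(c):
            if hostname(u) in BLOCKLIST:
                flagged.append((u, c))
    return flagged

comments = [
    "Great video! More at https://malicious.example.com/free-games",
    "Thanks for sharing.",
]
print(flag_comments(comments))
```

At the scale reported in the paper (millions of comment records), the same per-comment check would simply be applied in batch, with reputation lookups cached per hostname.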
LevelUp: A thin-cloud approach to game livestreaming
Landon P. Cox, Lixiang Ao
2020 IEEE/ACM Symposium on Edge Computing (SEC)
Pub Date: 2020-08-28 | DOI: 10.1109/SEC50012.2020.00037
Game livestreaming is hugely popular and growing. Each month, Twitch hosts over two million unique broadcasters with a collective audience of 140 million unique viewers. Despite its success, livestreaming services are costly to run. AWS and Azure both charge hundreds of dollars to encode 100 hours of multi-bitrate video, and potentially thousands of dollars each month to transfer one gamer’s video data to a relatively small audience. In this work, we demonstrate that mobile edge devices are ready to play a more central role in multi-bitrate livestreaming. In particular, we explore a new strategy for game livestreaming that we call a thin-cloud approach. Under a thin-cloud approach, livestreaming services rely on commodity web infrastructure to store and distribute video content, and leverage hardware acceleration on edge devices to transcode video and boost the video quality of low-bitrate streams. We have built a prototype system called LevelUp that embodies the thin-cloud approach, and using our prototype we demonstrate that mobile hardware acceleration can support real-time video transcoding and significantly boost the quality of low-bitrate video through a machine-learning technique called super resolution. We show that super resolution can improve the visual quality of low-resolution game streams by up to 88% while requiring approximately half the bandwidth of higher-bitrate streams. Finally, energy experiments show that LevelUp clients consume only 5% of their battery capacity watching 30 minutes of video.
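The client-side idea, receiving a low-resolution stream and upscaling each frame on-device, can be sketched as follows. This is a hedged illustration only: the upscaler below is plain nearest-neighbor interpolation standing in for LevelUp’s hardware-accelerated super-resolution network, and the grayscale list-of-rows frame format is an assumption for the sketch.

```python
def upscale_frame(frame, factor=2):
    """Placeholder for a learned super-resolution model: nearest-neighbor
    upscaling of a 2-D grayscale frame (a list of rows of pixel values).
    LevelUp would instead run an SR network on the device's accelerator,
    recovering detail rather than merely enlarging pixels."""
    out = []
    for row in frame:
        # Repeat each pixel horizontally...
        wide = [px for px in row for _ in range(factor)]
        # ...then repeat each widened row vertically.
        for _ in range(factor):
            out.append(list(wide))
    return out

# A 2x2 "frame" upscaled to 4x4; a real client would do this per decoded frame.
low = [[10, 20], [30, 40]]
for row in upscale_frame(low):
    print(row)
```

The bandwidth win in the abstract follows from this structure: the server ships only the low-bitrate stream (roughly half the bytes of a higher-bitrate one), and the quality gap is closed on the client.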
Spatula: Efficient cross-camera video analytics on large camera networks
Samvit Jain, Xun Zhang, Yuhao Zhou, G. Ananthanarayanan, Junchen Jiang, Yuanchao Shu, P. Bahl, Joseph Gonzalez
2020 IEEE/ACM Symposium on Edge Computing (SEC)
Pub Date: 2020-08-23 | DOI: 10.1109/SEC50012.2020.00016
Cameras are deployed at scale to search for and track objects of interest (e.g., a suspected person) across the camera network on live video. Such cross-camera analytics is data- and compute-intensive, and its costs grow with the number of cameras and with time. We present Spatula, a cost-efficient system that enables scaling cross-camera analytics on edge compute boxes to large camera networks by leveraging spatial and temporal cross-camera correlations. While such correlations have been used in the computer vision community, Spatula uses them to drastically reduce communication and computation costs by pruning the search space of a query identity (e.g., ignoring frames not correlated with the query identity’s current position). Spatula provides the first system substrate on which cross-camera analytics applications can be built to efficiently harness the cross-camera correlations that are abundant in large camera deployments. Spatula reduces compute load by 8.3× on an 8-camera dataset, and by 23×–86× on two datasets with hundreds of cameras (simulated from real vehicle/pedestrian traces). We have also implemented Spatula on a testbed of 5 AWS DeepLens cameras.
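The pruning idea described above can be sketched as follows: given the camera where the query identity was last seen, search only the cameras historically correlated with it instead of the whole network. The correlation map and camera names here are illustrative assumptions, not Spatula’s learned model, which would derive correlations from past cross-camera trajectories and their timing.

```python
# Hypothetical spatial correlation map: for each camera, the set of cameras
# an identity seen there is likely to appear at next (toy data).
CORRELATED = {
    "cam1": {"cam2", "cam3"},
    "cam2": {"cam1", "cam4"},
}

def cameras_to_search(last_seen, all_cams):
    """Prune the search space for a query identity.

    Falls back to scanning the full network only when no correlation
    information exists for the camera of the last sighting."""
    return CORRELATED.get(last_seen, set(all_cams))

def pruning_factor(last_seen, all_cams):
    """Ratio of full-network cost to pruned cost (frames assumed uniform)."""
    return len(all_cams) / len(cameras_to_search(last_seen, all_cams))

all_cams = {f"cam{i}" for i in range(1, 9)}  # an 8-camera deployment
print(cameras_to_search("cam1", all_cams))
print(pruning_factor("cam1", all_cams))
```

In this toy 8-camera case only 2 of 8 cameras are searched; the paper’s 23×–86× reductions on hundreds of cameras come from the same effect (plus temporal pruning) at much larger scale.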