Title: Saliency-Aware Privacy Protection in Augmented Reality Systems
Authors: Gautham Ramajayam, Tao Sun, C. C. Tan, Lannan Luo, Haibin Ling
DOI: https://doi.org/10.1145/3597063.3597358
Published: 2023-06-18, Proceedings of the First Workshop on Metaverse Systems and Applications

Abstract: The augmented reality (AR) Metaverse environment combines the physical and virtual worlds. Privacy is a major concern in AR because the cameras used to capture the physical world can also capture images that may violate user or bystander privacy. Advances in deep learning for processing images and videos have exacerbated such privacy risks. This paper presents a new technique to protect privacy in AR systems by combining visual saliency with privacy-sensitive object detection. We show that our technique provides additional context for a given image, better balancing privacy against the overall usability of the system.
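The abstract does not spell out how saliency and privacy-sensitive detection are combined; one plausible reading is to obscure detected privacy-sensitive regions only where they are not salient to the viewer's task. A minimal NumPy sketch under that assumption (the function name, threshold, and pixelation scheme are all hypothetical, not the authors' method):

```python
import numpy as np

def protect_privacy(image, saliency, boxes, thresh=0.5, block=8):
    """Pixelate privacy-sensitive boxes whose mean saliency falls below
    `thresh`, leaving salient (task-relevant) regions untouched.

    image    : HxWx3 uint8 array
    saliency : HxW float array in [0, 1]
    boxes    : list of (x0, y0, x1, y1) privacy-sensitive detections
    """
    out = image.copy()
    for x0, y0, x1, y1 in boxes:
        if saliency[y0:y1, x0:x1].mean() >= thresh:
            continue  # salient enough to keep visible for usability
        # Pixelate: replace each block x block tile by its mean color.
        patch = out[y0:y1, x0:x1].astype(float)
        h, w = patch.shape[:2]
        for by in range(0, h, block):
            for bx in range(0, w, block):
                tile = patch[by:by + block, bx:bx + block]
                tile[...] = tile.mean(axis=(0, 1))
        out[y0:y1, x0:x1] = patch.astype(np.uint8)
    return out
```

In a real system the saliency map would come from a saliency model and the boxes from a privacy-sensitive object detector; the threshold is the knob that trades privacy against usability.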
Title: Players are not Ready 101: A Tutorial on Organising Mixed-mode Events in the Metaverse
Authors: Kirill A. Shatilov, A. Alhilal, Tristan Braud, Lik-Hang Lee, Pengyuan Zhou, Pan Hui
DOI: https://doi.org/10.1145/3597063.3597360
Published: 2023-06-18, Proceedings of the First Workshop on Metaverse Systems and Applications

Abstract: While the academic community tries to define and experiment with the metaverse, businesses and institutions seek to build their representation in it. Many educational institutions build meta-campuses and move online classes into virtual environments that go beyond simple videoconferencing. This paper describes our experience building a university metaverse, highlighting the technical, human, and organisational challenges we encountered through two major events. Considering the issues of real-time communication and scalability in the web-based metaverse, we also present an analysis of video streaming in three virtual reality (VR) platforms: Meta's Workrooms, Spatial, and Mozilla Hubs.
Title: RAREST: Emulation of Augmented Reality Assisted Multi-UAV-UGV Systems
Authors: Benjamin P. Carlson, Chenyang Wang, Qifeng Han
DOI: https://doi.org/10.1145/3597063.3597361
Published: 2023-06-18, Proceedings of the First Workshop on Metaverse Systems and Applications

Abstract: AR-assisted multi-UAV-UGV systems are versatile robotic platforms for challenging missions. However, these systems face several challenges, such as energy constraints, limited computation capability, and intermittent network connectivity. In this paper, we present RAREST (AR Assisted GRound and AErial System Twin), a digital twin (DT) framework for emulating such systems. RAREST simulates the CPU workload and energy consumption of each UAV for different tasks, enables cooperation and offloading between UAVs and UGVs, and provides an AR interface through which human users interact with the system. We describe the envisioned framework, its potential use cases, and its benefits over pure simulations. We also report our preliminary work on simulating CPU workload and energy consumption for different object recognition tasks on UAVs, and discuss how the framework can be expanded and implemented in the future.
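The abstract does not describe RAREST's emulation model. As a hypothetical sketch of the kind of per-task CPU/energy accounting and UAV-to-UGV offloading it mentions (the task profiles, numbers, and reserve policy below are invented for illustration, not taken from the paper):

```python
from dataclasses import dataclass

# Hypothetical per-task compute demand in giga-cycles (illustrative only).
TASK_PROFILES = {
    "detect_small": {"gcycles": 2.0},
    "detect_large": {"gcycles": 8.0},
}

@dataclass
class Node:
    name: str
    cpu_ghz: float         # compute capability
    active_power_w: float  # power draw while computing
    battery_j: float       # remaining energy budget

    def run(self, task):
        """Execute a task; return (latency_s, energy_j) and drain the battery."""
        latency = TASK_PROFILES[task]["gcycles"] / self.cpu_ghz
        energy = latency * self.active_power_w
        self.battery_j -= energy
        return latency, energy

def schedule(task, uav, ugv, reserve_j=50.0):
    """Run `task` on the UAV unless that would dip below its energy
    reserve; otherwise offload to the less-constrained UGV."""
    cost = (TASK_PROFILES[task]["gcycles"] / uav.cpu_ghz) * uav.active_power_w
    node = uav if uav.battery_j - cost >= reserve_j else ugv
    latency, energy = node.run(task)
    return node.name, latency, energy
```

A digital twin built on this pattern could replay measured task profiles instead of the invented constants, which is presumably where the paper's preliminary CPU/energy measurements would plug in.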
Title: Foveated Spatial Compression for Remote Rendered Virtual Reality
Authors: Teemu Kämäräinen, M. Siekkinen
DOI: https://doi.org/10.1145/3597063.3597359
Published: 2023-06-18, Proceedings of the First Workshop on Metaverse Systems and Applications

Abstract: In remote rendered virtual reality (VR), the rendering of the application is moved to the cloud, enabling high-quality real-time content to be consumed on low-powered standalone head-mounted displays (HMDs). The rendered frames are encoded into a video stream and streamed to a thin client, which relays the user's input to the server and decodes and displays the incoming video. Latency and high bandwidth requirements are key challenges for remote rendered graphics. Foveation can be used to optimize the quality of the transmitted frames to match the human visual system (HVS). In this paper, we evaluate multiple strategies for applying foveation to spatially compress video frames, i.e., reduce their resolution, before transmission. We also show how the foveation methods can be combined with super resolution to reduce the bandwidth usage of real-time remote rendered VR and optimize the perceived image quality.
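As an illustrative sketch of foveated spatial compression (not the authors' method; the function names and parameters are hypothetical), one simple strategy is to transmit a full-resolution crop around the gaze point plus a uniformly downsampled copy of the whole frame, which a client-side super-resolution step would later upscale:

```python
import numpy as np

def foveate(frame, gaze, fov_radius=64, periphery_scale=4):
    """Split a frame into a full-resolution foveal crop around `gaze`
    (x, y) and a `periphery_scale`-times downsampled copy of the whole
    frame. Sending both instead of the original frame cuts the pixel
    count, at the cost of peripheral detail."""
    h, w = frame.shape[:2]
    gx, gy = gaze
    x0, x1 = max(0, gx - fov_radius), min(w, gx + fov_radius)
    y0, y1 = max(0, gy - fov_radius), min(h, gy + fov_radius)
    fovea = frame[y0:y1, x0:x1].copy()
    # Naive box-filter downsample of the periphery (a placeholder for a
    # real video encoder / super-resolution pipeline).
    s = periphery_scale
    hh, ww = h - h % s, w - w % s
    periphery = frame[:hh, :ww].reshape(hh // s, s, ww // s, s, -1).mean(axis=(1, 3))
    return fovea, periphery, (x0, y0)

def pixels_sent(fovea, periphery):
    """Pixel budget of the foveated representation."""
    return fovea.shape[0] * fovea.shape[1] + periphery.shape[0] * periphery.shape[1]
```

With a 128x128 fovea and 4x peripheral downsampling on a 512x256 frame, this representation carries under a fifth of the original pixels; the client composites the sharp fovea over the upscaled periphery.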
Title: Proceedings of the First Workshop on Metaverse Systems and Applications
DOI: https://doi.org/10.1145/3597063