
Latest Publications in IEEE MultiMedia

CS-generic-full-km-2023.indd
IF 3.2 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2023-04-01 DOI: 10.1109/mmul.2023.3280642
Citations: 0
Could Head Motions Affect Quality When Viewing 360° Videos?
IF 3.2 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2023-04-01 DOI: 10.1109/MMUL.2022.3215089
Burak Kara, Mehmet N. Akcay, A. Begen, Saba Ahsan, I. Curcio, Emre B. Aksu
Measuring quality accurately and quickly (preferably in real time) when streaming 360° videos is essential to enhancing the user experience. Most quality-of-experience metrics have primarily used viewport quality as a simple surrogate for such experiences at a given time. While some researchers have later augmented this baseline approach with pupil and gaze tracking, head tracking has not been considered in enough detail. This article tackles whether head motions can influence the perception of 360° videos. Inspired by the latest research, this article conceptualizes a head-motion-aware metric for measuring viewport quality. A comparative study against existing head-motion-unaware metrics reveals sizeable differences. Motivated by this, we invite the community to research this topic further and substantiate the new metric's validity.
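The abstract does not state the metric's formula. Purely as a hypothetical illustration of how head-motion awareness could modulate a pooled viewport quality score, one might down-weight frames viewed during fast head turns, on the assumption that rapid motion masks quality degradations. The function name, the `half_sat` parameter, and the weighting scheme below are all assumptions, not the authors' metric.

```python
def motion_aware_quality(frame_quality, head_speed_deg_s, half_sat=90.0):
    """Hypothetical head-motion-aware viewport quality pooling.

    frame_quality: per-frame viewport quality scores (e.g., 0-100).
    head_speed_deg_s: head angular speed (deg/s) at each frame.
    half_sat: assumed speed at which the perceptual weight drops to 0.5.
    Frames viewed during rapid head motion contribute less to the
    pooled score than frames viewed while the head is still.
    """
    weights = [1.0 / (1.0 + s / half_sat) for s in head_speed_deg_s]
    total = sum(w * q for w, q in zip(weights, frame_quality))
    return total / sum(weights)

# The same low-quality frame hurts the pooled score less when it
# coincides with a fast (180 deg/s) head turn.
static = motion_aware_quality([90, 40, 90], [0, 0, 0])
moving = motion_aware_quality([90, 40, 90], [0, 180, 0])
```

Under this sketch, a head-motion-unaware metric (all weights equal) and the motion-aware variant diverge exactly when poor-quality frames coincide with head movement, which matches the kind of difference the study reports.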
Citations: 1
Passthrough Mixed Reality With Oculus Quest 2: A Case Study on Learning Piano
IF 3.2 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2023-04-01 DOI: 10.1109/MMUL.2022.3232892
Mariano M. Banquiero, Gracia Valdeolivas, Sergio Trincado, Natasha Garcia, M. Juan
Mixed reality (MR) in standalone headsets has many advantages over other types of devices. The recent arrival of Passthrough on the Oculus Quest 2 opens up new possibilities. This work details the features of the current Passthrough, how its potential was harnessed, and how its drawbacks were minimized to develop a satisfying MR experience, applied to learning to play the piano as a use case. A total of 33 piano students participated in a study comparing participants' interpretation outcomes and subjective experience when using an MR application for learning piano with two visualization modes: border lines on all the keys (Wireframe) versus solid color hiding the real keys (Solid). Both visualization modes provided a satisfying experience. Even though there were no significant differences in the analyzed variables, the students preferred the Solid mode, indicating that short-distance Passthrough limitations should be minimized in application development.
Citations: 2
Edge-Assisted Virtual Viewpoint Generation for Immersive Light Field
IF 3.2 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2023-04-01 DOI: 10.1109/MMUL.2022.3232771
Xinjue Hu, Chen-chao Wang, Lin Zhang, Guo Chen, S. Shirmohammadi
Light field (LF), which describes the light rays emanating from each point in a scene, can be used as a six-degrees-of-freedom (6DOF) immersive medium. Like traditional multiview video, LF is captured by an array of cameras, producing a large data volume that must be streamed from a server to users. When a user wishes to watch the scene from a viewpoint that no camera has captured directly, a virtual viewpoint must be rendered in real time from the directly captured viewpoints. This places high demands on both the computing and caching capabilities of the infrastructure. Edge computing (EC), which brings computation resources closer to users, is a promising enabler for real-time LF viewpoint rendering. In this article, we present a novel EC-assisted mobile LF delivery framework that caches parts of the LF viewpoints in advance and renders the requested virtual viewpoints on demand at the edge node or on the user's device. Numerical results demonstrate that the proposed framework can reduce average service response latency by 45% and user-equipment energy consumption by 60%, at the cost of 55% additional caching consumption at the edge nodes.
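To make the cache-then-render split concrete, here is a minimal sketch of edge-side virtual-view synthesis under strong simplifying assumptions: viewpoints lie on a 1-D camera baseline, and the virtual view is a distance-weighted blend of the two nearest cached views. Real light-field rendering uses depth-aware warping of many views; the function name and data layout are illustrative only.

```python
def synthesize_viewpoint(cached, target):
    """Blend the two cached viewpoints nearest to the requested position.

    cached: dict mapping a viewpoint position (float along the camera
            baseline) to its image, here a flat list of pixel values.
    target: requested viewpoint position.
    A cache hit (or a target outside the cached range, which is clamped)
    needs no rendering at all.
    """
    positions = sorted(cached)
    lo = max((p for p in positions if p <= target), default=positions[0])
    hi = min((p for p in positions if p >= target), default=positions[-1])
    if lo == hi:
        return cached[lo]          # cache hit or clamped to the boundary
    w = (target - lo) / (hi - lo)  # distance-based blending weight
    return [(1 - w) * a + w * b for a, b in zip(cached[lo], cached[hi])]

# Two cached camera views; the midpoint view is synthesized on demand.
cached = {0.0: [0.0, 10.0], 1.0: [10.0, 20.0]}
mid_view = synthesize_viewpoint(cached, 0.5)
```

The latency/energy trade-off reported above comes from doing exactly this kind of on-demand synthesis at the edge instead of shipping every source view to the device.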
Citations: 0
over-rainbow-podcast-hHoriz-ks.indd
IF 3.2 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2023-04-01 DOI: 10.1109/mmul.2023.3280676
Citations: 0
Specular Detection and Rendering for Immersive Multimedia
IF 3.2 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2023-04-01 DOI: 10.1109/MMUL.2023.3262195
The Van Le, Yong-hoon Choi, Jin Young Lee
Immersive multimedia has received considerable attention because of its large impact on user experience. To realize high immersion in virtual environments, many virtual views must be generated at arbitrary viewpoints on advanced display devices. However, specular regions, which affect user experience, have not been fully investigated in the immersive multimedia field. In this article, we propose specular highlight detection and rendering methods to improve immersion. For specular detection, a high-performance variational attention U-network (VAUnet), which combines a variational autoencoder and a spatial attention mechanism, is proposed together with a hybrid loss function. The specular regions detected by VAUnet are compressed with an immersive video coding standard (MPEG-I), and rendering is then performed using the decompressed specular regions. Extensive experiments demonstrate that the proposed method improves specular detection performance and subjective rendering quality.
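The abstract does not specify the hybrid loss. A common combination for a variational-autoencoder-based segmentation network is a reconstruction term (here binary cross-entropy on the predicted specular mask) plus a KL-divergence term pulling the latent distribution toward a standard normal prior; the `beta` weight and all function names below are assumptions, not VAUnet's actual loss.

```python
import math

def bce(pred, target, eps=1e-7):
    """Mean binary cross-entropy between a predicted specular mask and
    the ground-truth mask (both flattened, values in [0, 1])."""
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(pred, target)) / len(pred)

def kl_gaussian(mu, log_var):
    """KL divergence of the encoder's diagonal Gaussian N(mu, exp(log_var))
    from the standard normal prior, as in a variational autoencoder."""
    return -0.5 * sum(1 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, log_var)) / len(mu)

def hybrid_loss(pred, target, mu, log_var, beta=0.1):
    """Assumed hybrid loss: mask reconstruction + beta-weighted KL term."""
    return bce(pred, target) + beta * kl_gaussian(mu, log_var)
```

With `mu = 0` and `log_var = 0` the KL term vanishes, so the hybrid loss reduces to plain cross-entropy; `beta` then trades off mask fidelity against latent-space regularity.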
Citations: 0
IEEE Computing Edge
IF 3.2 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2023-04-01 DOI: 10.1109/mmul.2023.3280684
Citations: 0
Blockchain-Empowered Privacy-Preserving Digital Object Trading in the Metaverse
IF 3.2 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2023-04-01 DOI: 10.1109/MMUL.2023.3246528
Yao Xiao, Lei Xu, Can Zhang, Liehuang Zhu, Yan Zhang
The metaverse is an advanced digital world where users can have interactive and immersive experiences. Users enter the metaverse through digital objects created by extended reality and digital twin technologies. The ownership of these digital objects can be established by blockchain-based nonfungible tokens (NFTs), which are of vital importance for the economics of the metaverse. Users can utilize NFTs to engage in various social and economic activities. However, current NFT protocols expose the owner's information to the public, which may conflict with privacy requirements. In this article, we propose a protocol, NFTPrivate, that enables anonymous and confidential trading of digital objects. The key idea is to use cryptographic commitments to hide users' addresses. By constructing proper zero-knowledge proofs, the owner can initiate privacy-preserving yet publicly verifiable transactions. Illustrative results show that the proposed protocol has higher computation and storage overhead than traditional NFT protocols, which we consider an acceptable compromise for privacy protection.
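NFTPrivate's construction (commitments plus zero-knowledge proofs) is not detailed in the abstract, and the zero-knowledge part is beyond a short sketch. The address-hiding commitment step alone, though, can be illustrated with a plain hash commitment: publish `H(address || nonce)`, keep the nonce secret, and reveal both only to a verifier. This is a generic illustration, not the paper's actual scheme.

```python
import hashlib
import secrets

def commit(address: str):
    """Commit to an owner address: publish the digest, keep the nonce secret.
    Without the nonce the digest reveals nothing about the address (hiding);
    with it, the address cannot later be swapped for another (binding)."""
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(address.encode() + nonce).hexdigest()
    return digest, nonce

def open_commitment(digest: str, address: str, nonce: bytes) -> bool:
    """Verify that (address, nonce) opens the published commitment."""
    return hashlib.sha256(address.encode() + nonce).hexdigest() == digest

# The public ledger stores only the digest; the owner holds the nonce.
digest, nonce = commit("0xA1iceAddr")
```

In the full protocol a zero-knowledge proof would let the owner demonstrate properties of the committed address (e.g., that it authorizes a trade) without ever opening the commitment publicly.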
Citations: 3
Edge Intelligence-Empowered Immersive Media
IF 3.2 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2023-04-01 DOI: 10.1109/MMUL.2023.3247574
Zhi Wang, Jiangchuan Liu, Wenwu Zhu
Recent years have witnessed many immersive media services and applications, ranging from 360° video streaming to augmented and virtual reality (VR) and the recent metaverse experiences. These new applications usually have common features, including high fidelity, immersive interaction, and open data exchange between people and the environment. As an emerging paradigm, edge computing has become increasingly ready to support these features. We first show that a key to unleashing the power of edge computing for immersive multimedia is handling artificial intelligence models and data. Then, we present a framework that enables joint accuracy- and latency-aware edge intelligence, with adaptive deep learning model deployment and data streaming. We show that not only conventional mechanisms such as content placement and rate adaptation but also the emerging 360° and VR streaming can benefit from such edge intelligence.
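The framework's "joint accuracy- and latency-aware" deployment is not specified beyond the abstract. One simple way such a policy could work, sketched under assumed profiling data, is to deploy the most accurate model variant whose measured edge inference latency fits the current budget; the model zoo, its numbers, and the function name are all hypothetical.

```python
def pick_model(models, latency_budget_ms):
    """Hypothetical accuracy/latency-aware model selection at the edge.

    models: list of (name, accuracy, latency_ms) tuples, assumed to come
            from offline profiling of each deployable model variant.
    Returns the most accurate variant within the latency budget, or
    degrades to the fastest variant when nothing fits.
    """
    feasible = [m for m in models if m[2] <= latency_budget_ms]
    if not feasible:
        return min(models, key=lambda m: m[2])  # degrade gracefully
    return max(feasible, key=lambda m: m[1])

# Assumed profiled variants of one detection model.
zoo = [("tiny", 0.70, 5), ("small", 0.78, 12), ("large", 0.85, 40)]
```

As the streaming deadline tightens (say, during a fast 360° viewport change), the budget shrinks and the policy automatically swaps in a smaller model, which is the adaptive-deployment behavior the framework describes.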
Citations: 1
AN.general_hHalf_jz.indd
IF 3.2 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2023-04-01 DOI: 10.1109/mmul.2023.3280677
Citations: 0